<> Thread pools explained in detail (painstakingly compiled)

Shuge: Do you use multithreading?

Me: I do.

Shuge: We don't really need it at all. Come on.

Me: ...... So fierce.

Shuge: How do you use it?

Me: Generally with a thread pool; I don't create threads directly.

Shuge: A thread pool??? What do you use that for?

Me: ...... !!!!!!

<>1, Benefits of using a thread pool

We already know how to create threads directly, so why use a thread pool?

* It reduces the number of times threads are created and destroyed: every worker thread can be reused and can execute many tasks.

* The number of worker threads in the pool can be adjusted to what the system can bear, preventing excessive memory consumption from bringing the server down.

<>2, Thread pool creation and execution

<>2.1 Execution flow of thread pool
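When a task is submitted to a ThreadPoolExecutor, the pool first creates a new thread if fewer than corePoolSize threads are running; otherwise it offers the task to the work queue; if the queue is full and fewer than maximumPoolSize threads are running, it creates an extra thread; and if the queue is full and the pool is already at maximumPoolSize, the RejectedExecutionHandler is invoked. A minimal sketch of this flow (the pool sizes, queue capacity and sleep time are illustrative assumptions, not taken from this article):

```java
import java.util.concurrent.*;

class FlowDemo {
    public static void main(String[] args) {
        // core = 1, max = 2, bounded queue of capacity 1:
        // task 1 -> core thread, task 2 -> queue, task 3 -> extra thread, task 4 -> rejected
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.AbortPolicy());

        Runnable sleepy = () -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        };

        for (int i = 1; i <= 4; i++) {
            try {
                pool.execute(sleepy);
                System.out.println("task " + i + " accepted, pool size = " + pool.getPoolSize());
            } catch (RejectedExecutionException e) {
                System.out.println("task " + i + " rejected");
            }
        }
        pool.shutdown();
    }
}
```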

<>2.2 Creation and use of thread pool

<>2.2.1 Task source code
```java
import java.util.concurrent.Callable;

// Sums the integers from num1 to num2 (inclusive)
class Task implements Callable<Integer> {
    private int num1;
    private int num2;

    public Task() {
    }

    public Task(int num1, int num2) {
        this.num1 = num1;
        this.num2 = num2;
    }

    @Override
    public Integer call() throws Exception {
        int sum = 0;
        for (int i = num1; i <= num2; i++) {
            sum += i;
        }
        return sum;
    }
}
```
<>2.2.2 Thread pool usage
```java
// Requires imports from java.util.concurrent.* and a JUnit @Test annotation.

/**
 * Compute the sum of the numbers 1..400 using a thread pool.
 */
@Test
public void pooltest() throws ExecutionException, InterruptedException {
    int corePoolSize = 4;
    int maximumPoolSize = 4;
    long keepAliveTime = 1000;
    TimeUnit unit = TimeUnit.MICROSECONDS;
    // Other choices: ArrayBlockingQueue, DelayQueue, LinkedBlockingQueue, PriorityBlockingQueue
    BlockingQueue<Runnable> workQueue = new SynchronousQueue<>();
    ThreadFactory threadFactory = Executors.defaultThreadFactory();
    RejectedExecutionHandler handler = new ThreadPoolExecutor.AbortPolicy();

    // Create the thread pool
    ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
            corePoolSize, maximumPoolSize, keepAliveTime, unit,
            workQueue, threadFactory, handler);

    // Create the tasks
    Task t1 = new Task(1, 100);
    Task t2 = new Task(101, 200);
    Task t3 = new Task(201, 300);
    Task t4 = new Task(301, 400);

    // Let the pool pick threads to execute the tasks
    Future<Integer> f1 = threadPoolExecutor.submit(t1);
    Future<Integer> f2 = threadPoolExecutor.submit(t2);
    Future<Integer> f3 = threadPoolExecutor.submit(t3);
    Future<Integer> f4 = threadPoolExecutor.submit(t4);

    threadPoolExecutor.shutdown();

    int sum1 = f1.get();
    int sum2 = f2.get();
    int sum3 = f3.get();
    int sum4 = f4.get();
    System.out.println(sum1 + sum2 + sum3 + sum4);
}
```
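Note that shutdown() only stops the pool from accepting new tasks; the tasks already submitted still run to completion, and each Future.get() call blocks until its task has finished, so the final sum (80200, the sum of 1 through 400) is printed only after all four tasks complete.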
<>3, Thread pool parameters explained in detail
```java
/**
 * Creates a new {@code ThreadPoolExecutor} with the given initial
 * parameters.
 *
 * @param corePoolSize the number of threads to keep in the pool, even
 *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
 * @param maximumPoolSize the maximum number of threads to allow in the
 *        pool
 * @param keepAliveTime when the number of threads is greater than
 *        the core, this is the maximum time that excess idle threads
 *        will wait for new tasks before terminating.
 * @param unit the time unit for the {@code keepAliveTime} argument
 * @param workQueue the queue to use for holding tasks before they are
 *        executed.  This queue will hold only the {@code Runnable}
 *        tasks submitted by the {@code execute} method.
 * @param threadFactory the factory to use when the executor
 *        creates a new thread
 * @param handler the handler to use when execution is blocked
 *        because the thread bounds and queue capacities are reached
 * @throws IllegalArgumentException if one of the following holds:<br>
 *         {@code corePoolSize < 0}<br>
 *         {@code keepAliveTime < 0}<br>
 *         {@code maximumPoolSize <= 0}<br>
 *         {@code maximumPoolSize < corePoolSize}
 * @throws NullPointerException if {@code workQueue}
 *         or {@code threadFactory} or {@code handler} is null
 */
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) {
    if (corePoolSize < 0 ||
        maximumPoolSize <= 0 ||
        maximumPoolSize < corePoolSize ||
        keepAliveTime < 0)
        throw new IllegalArgumentException();
    if (workQueue == null || threadFactory == null || handler == null)
        throw new NullPointerException();
    this.corePoolSize = corePoolSize;
    this.maximumPoolSize = maximumPoolSize;
    this.workQueue = workQueue;
    this.keepAliveTime = unit.toNanos(keepAliveTime);
    this.threadFactory = threadFactory;
    this.handler = handler;
}
```

In plain language, the parameters are:

* corePoolSize: the number of threads kept alive in the pool even when they are idle, unless allowCoreThreadTimeOut is set.

* maximumPoolSize: the maximum number of threads that may live in the pool.

* keepAliveTime: when there are more threads than corePoolSize, the longest time an idle thread waits for a new task before terminating.

* unit: the time unit of the keepAliveTime argument.

* workQueue: the queue that holds tasks which have not been executed yet (the tasks implement Runnable and are submitted through the execute method).

* threadFactory: the factory the pool uses to create new threads.

* handler: the handler (saturation policy) used when execution is blocked because the thread bound and the queue capacity have been reached.
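To make these parameters concrete, here is a minimal sketch of a hand-built pool. The values are illustrative assumptions, not from this article: the core size follows the common rule of thumb of roughly one thread per core for CPU-bound work, with a bounded queue, the default factory and the default AbortPolicy.

```java
import java.util.concurrent.*;

class PoolConfig {
    // A hypothetical configuration for mostly CPU-bound work
    static ThreadPoolExecutor newCpuBoundPool() {
        int cores = Runtime.getRuntime().availableProcessors();
        return new ThreadPoolExecutor(
                cores,                          // corePoolSize: roughly one thread per core
                cores * 2,                      // maximumPoolSize
                60L, TimeUnit.SECONDS,          // keepAliveTime for threads above the core size
                new ArrayBlockingQueue<>(100),  // bounded workQueue
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.AbortPolicy());
    }
}
```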
<>3.1 Blocking queues: BlockingQueue

`ArrayBlockingQueue`, `DelayQueue`, `LinkedBlockingDeque`, `LinkedBlockingQueue`, `LinkedTransferQueue`, `PriorityBlockingQueue`, `SynchronousQueue`
<>3.1.1 Unbounded queues

There is no limit on the queue size. The commonly used unbounded queue is LinkedBlockingQueue. Be careful when using it as the work queue: if tasks take a long time to execute, a large number of new tasks can pile up in the queue and eventually cause an OOM. When QPS is very high and large amounts of data come in, many tasks are added to this unbounded LinkedBlockingQueue, memory usage surges, and the server hangs.
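A minimal sketch of that failure mode, with illustrative sizes: because a LinkedBlockingQueue created without a capacity is unbounded, the queue never reports full, maximumPoolSize never takes effect, and slow tasks simply accumulate in memory.

```java
import java.util.concurrent.*;

class UnboundedQueueDemo {
    public static void main(String[] args) {
        // maximumPoolSize = 10 never takes effect: the unbounded queue never becomes "full"
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>());   // unbounded

        for (int i = 0; i < 100_000; i++) {     // under high QPS this backlog can grow until OOM
            pool.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            });
        }
        System.out.println("queued tasks: " + pool.getQueue().size());
        pool.shutdownNow();
    }
}
```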

<>3.1.2 Bounded queues

There are two common types: one follows the FIFO principle, such as ArrayBlockingQueue; the other is a priority queue such as PriorityBlockingQueue, where the priority is determined by a Comparator.
When using a bounded queue, the queue size and the pool size should match each other: a smaller thread pool with a larger bounded queue reduces memory consumption, CPU usage and context switching, but may limit system throughput.
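As an illustration of the priority case, here is a minimal sketch; the PrioritizedTask class and the priority ordering are hypothetical helpers for this example, not part of the article. Tasks are submitted through execute, so the queue holds the PrioritizedTask objects themselves, and the Comparator decides which queued task runs next.

```java
import java.util.Comparator;
import java.util.concurrent.*;

class PriorityQueueDemo {
    // Hypothetical helper: a Runnable that carries a priority value
    static class PrioritizedTask implements Runnable {
        final int priority;
        PrioritizedTask(int priority) { this.priority = priority; }
        @Override public void run() {
            System.out.println("running task with priority " + priority);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Higher priority values are taken from the queue first
        Comparator<Runnable> byPriority =
                Comparator.comparingInt(r -> -((PrioritizedTask) r).priority);
        BlockingQueue<Runnable> queue = new PriorityBlockingQueue<>(10, byPriority);

        ThreadPoolExecutor pool =
                new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, queue);

        // The first task goes straight to the single worker; the rest wait in the
        // priority queue and are executed roughly from highest to lowest priority.
        for (int p = 1; p <= 5; p++) {
            pool.execute(new PrioritizedTask(p));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```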

<>3.1.3 Synchronous handoff queues

If you do not want tasks to wait in a queue but want them handed directly to a worker thread, use SynchronousQueue as the work queue. SynchronousQueue is not a real queue but a handoff mechanism between threads: to put an element into a SynchronousQueue, another thread must already be waiting to receive it. This queue is recommended only when the thread pool is unbounded or a saturation policy is in place.
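For reference, this is exactly the combination that Executors.newCachedThreadPool uses (see section 4.2): a SynchronousQueue as the work queue together with an effectively unbounded maximumPoolSize of Integer.MAX_VALUE, so every handed-off task immediately finds, or gets, a thread.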

<>3.2 Saturation strategy

<>3.2.1 ThreadPoolExecutor.AbortPolicy
<> Description
Throws a RejectedExecutionException directly; the task is not handled. As the source below notes, this is the default handler.
<> Source code
```java
/**
 * A handler for rejected tasks that throws a
 * {@link RejectedExecutionException}.
 *
 * This is the default handler for {@link ThreadPoolExecutor} and
 * {@link ScheduledThreadPoolExecutor}.
 */
public static class AbortPolicy implements RejectedExecutionHandler {
    /**
     * Creates an {@code AbortPolicy}.
     */
    public AbortPolicy() { }

    /**
     * Always throws RejectedExecutionException.
     *
     * @param r the runnable task requested to be executed
     * @param e the executor attempting to execute this task
     * @throws RejectedExecutionException always
     */
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        throw new RejectedExecutionException("Task " + r.toString() +
                                             " rejected from " + e.toString());
    }
}
```
<>3.2.2 ThreadPoolExecutor.CallerRunsPolicy
<> Description
Runs the rejected task directly in the thread that called {@code execute}, unless the executor has been shut down, in which case the task is discarded.
<> Source code
```java
/**
 * A handler for rejected tasks that runs the rejected task
 * directly in the calling thread of the {@code execute} method,
 * unless the executor has been shut down, in which case the task
 * is discarded.
 */
public static class CallerRunsPolicy implements RejectedExecutionHandler {
    /**
     * Creates a {@code CallerRunsPolicy}.
     */
    public CallerRunsPolicy() { }

    /**
     * Executes task r in the caller's thread, unless the executor
     * has been shut down, in which case the task is discarded.
     *
     * @param r the runnable task requested to be executed
     * @param e the executor attempting to execute this task
     */
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        if (!e.isShutdown()) {
            r.run();
        }
    }
}
```
<>3.2.3 ThreadPoolExecutor.DiscardOldestPolicy
<> Description
Discards the task at the head of the queue and then retries submitting the new task (not suitable when the work queue is a priority queue).
<> Source code
```java
/**
 * A handler for rejected tasks that discards the oldest unhandled
 * request and then retries {@code execute}, unless the executor
 * is shut down, in which case the task is discarded.
 */
public static class DiscardOldestPolicy implements RejectedExecutionHandler {
    /**
     * Creates a {@code DiscardOldestPolicy} for the given executor.
     */
    public DiscardOldestPolicy() { }

    /**
     * Obtains and ignores the next task that the executor
     * would otherwise execute, if one is immediately available,
     * and then retries execution of task r, unless the executor
     * is shut down, in which case task r is instead discarded.
     *
     * @param r the runnable task requested to be executed
     * @param e the executor attempting to execute this task
     */
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        if (!e.isShutdown()) {
            e.getQueue().poll();
            e.execute(r);
        }
    }
}
```
<>3.2.4 ThreadPoolExecutor.DiscardPolicy
<> Description
Silently discards the newly submitted task without throwing an exception.
<> Source code
```java
/**
 * A handler for rejected tasks that silently discards the
 * rejected task.
 */
public static class DiscardPolicy implements RejectedExecutionHandler {
    /**
     * Creates a {@code DiscardPolicy}.
     */
    public DiscardPolicy() { }

    /**
     * Does nothing, which has the effect of discarding task r.
     *
     * @param r the runnable task requested to be executed
     * @param e the executor attempting to execute this task
     */
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    }
}
```
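To see a saturation policy in action, here is a minimal sketch with illustrative sizes and sleep times: a pool with one thread and a one-slot queue is given three tasks, so the third is rejected, and with CallerRunsPolicy the rejected task runs on the submitting (main) thread instead of being dropped or throwing.

```java
import java.util.concurrent.*;

class CallerRunsDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        Runnable task = () -> {
            System.out.println("running on " + Thread.currentThread().getName());
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
        };

        // Task 1 occupies the single worker, task 2 fills the queue,
        // task 3 is rejected and therefore executed by the main thread itself.
        pool.execute(task);
        pool.execute(task);
        pool.execute(task);

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```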
<>4, Thread pools provided by the JDK

<>4.1 Executors.newScheduledThreadPool
```java
/**
 * Creates a thread pool that can schedule commands to run after a
 * given delay, or to execute periodically.
 * @param corePoolSize the number of threads to keep in the pool,
 *        even if they are idle
 * @param threadFactory the factory to use when the executor
 *        creates a new thread
 * @return the newly created scheduled thread pool
 * @throws IllegalArgumentException if {@code corePoolSize < 0}
 * @throws NullPointerException if threadFactory is null
 */
public static ScheduledExecutorService newScheduledThreadPool(
        int corePoolSize, ThreadFactory threadFactory) {
    return new ScheduledThreadPoolExecutor(corePoolSize, threadFactory);
}

/**
 * Creates a new {@code ScheduledThreadPoolExecutor} with the
 * given initial parameters.
 *
 * @param corePoolSize the number of threads to keep in the pool, even
 *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
 * @param threadFactory the factory to use when the executor
 *        creates a new thread
 * @throws IllegalArgumentException if {@code corePoolSize < 0}
 * @throws NullPointerException if {@code threadFactory} is null
 */
public ScheduledThreadPoolExecutor(int corePoolSize, ThreadFactory threadFactory) {
    super(corePoolSize, Integer.MAX_VALUE,
          DEFAULT_KEEPALIVE_MILLIS, MILLISECONDS,
          new DelayedWorkQueue(), threadFactory);
}
```
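A brief usage sketch (the delays and pool size are illustrative): the scheduled pool runs a one-shot task after a delay and a periodic task at a fixed rate. Note that, per the source above, maximumPoolSize is Integer.MAX_VALUE and the DelayedWorkQueue grows as needed.

```java
import java.util.concurrent.*;

class ScheduledDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = new ScheduledThreadPoolExecutor(2);

        // Run once after a 1 second delay
        scheduler.schedule(
                () -> System.out.println("one-shot task"), 1, TimeUnit.SECONDS);

        // Run every 2 seconds, starting immediately
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("periodic task"), 0, 2, TimeUnit.SECONDS);

        // The scheduler keeps its threads alive until it is shut down explicitly.
    }
}
```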
<>4.2 Executors.newCachedThreadPool
```java
/**
 * Creates a thread pool that creates new threads as needed, but
 * will reuse previously constructed threads when they are
 * available, and uses the provided
 * ThreadFactory to create new threads when needed.
 *
 * @param threadFactory the factory to use when creating new threads
 * @return the newly created thread pool
 * @throws NullPointerException if threadFactory is null
 */
public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>(),
                                  threadFactory);
}
```
<>4.3 Executors.newFixedThreadPool
```java
/**
 * Creates a thread pool that reuses a fixed number of threads
 * operating off a shared unbounded queue, using the provided
 * ThreadFactory to create new threads when needed.  At any point,
 * at most {@code nThreads} threads will be active processing
 * tasks.  If additional tasks are submitted when all threads are
 * active, they will wait in the queue until a thread is
 * available.  If any thread terminates due to a failure during
 * execution prior to shutdown, a new one will take its place if
 * needed to execute subsequent tasks.  The threads in the pool will
 * exist until it is explicitly {@link ExecutorService#shutdown
 * shutdown}.
 *
 * @param nThreads the number of threads in the pool
 * @param threadFactory the factory to use when creating new threads
 * @return the newly created thread pool
 * @throws NullPointerException if threadFactory is null
 * @throws IllegalArgumentException if {@code nThreads <= 0}
 */
public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>(),
                                  threadFactory);
}
```
<>4.4 Executors.newSingleThreadExecutor
```java
/**
 * Creates an Executor that uses a single worker thread operating
 * off an unbounded queue, and uses the provided ThreadFactory to
 * create a new thread when needed. Unlike the otherwise
 * equivalent {@code newFixedThreadPool(1, threadFactory)} the
 * returned executor is guaranteed not to be reconfigurable to use
 * additional threads.
 *
 * @param threadFactory the factory to use when creating new threads
 * @return the newly created single-threaded Executor
 * @throws NullPointerException if threadFactory is null
 */
public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory) {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>(),
                                threadFactory));
}
```
<>4.5 Executors.newSingleThreadScheduledExecutor
```java
/**
 * Creates a single-threaded executor that can schedule commands
 * to run after a given delay, or to execute periodically. (Note
 * however that if this single thread terminates due to a failure
 * during execution prior to shutdown, a new one will take its
 * place if needed to execute subsequent tasks.) Tasks are
 * guaranteed to execute sequentially, and no more than one task
 * will be active at any given time. Unlike the otherwise
 * equivalent {@code newScheduledThreadPool(1, threadFactory)}
 * the returned executor is guaranteed not to be reconfigurable to
 * use additional threads.
 *
 * @param threadFactory the factory to use when creating new threads
 * @return the newly created scheduled executor
 * @throws NullPointerException if threadFactory is null
 */
public static ScheduledExecutorService newSingleThreadScheduledExecutor(ThreadFactory threadFactory) {
    return new DelegatedScheduledExecutorService
        (new ScheduledThreadPoolExecutor(1, threadFactory));
}
```
<>4.6 Executors.newWorkStealingPool
```java
/**
 * Creates a thread pool that maintains enough threads to support
 * the given parallelism level, and may use multiple queues to
 * reduce contention. The parallelism level corresponds to the
 * maximum number of threads actively engaged in, or available to
 * engage in, task processing. The actual number of threads may
 * grow and shrink dynamically. A work-stealing pool makes no
 * guarantees about the order in which submitted tasks are
 * executed.
 *
 * @param parallelism the targeted parallelism level
 * @return the newly created thread pool
 * @throws IllegalArgumentException if {@code parallelism <= 0}
 * @since 1.8
 */
public static ExecutorService newWorkStealingPool(int parallelism) {
    return new ForkJoinPool
        (parallelism,
         ForkJoinPool.defaultForkJoinWorkerThreadFactory,
         null, true);
}

/**
 * Creates a work-stealing thread pool using the number of
 * {@linkplain Runtime#availableProcessors available processors}
 * as its target parallelism level.
 *
 * @return the newly created thread pool
 * @see #newWorkStealingPool(int)
 * @since 1.8
 */
public static ExecutorService newWorkStealingPool() {
    return new ForkJoinPool
        (Runtime.getRuntime().availableProcessors(),
         ForkJoinPool.defaultForkJoinWorkerThreadFactory,
         null, true);
}
```
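A minimal usage sketch (the tasks are illustrative): the returned pool is a ForkJoinPool, submission order is not the execution order, and since ForkJoinPool worker threads are daemon threads by default, the caller should wait on the Futures rather than rely on the pool to keep the JVM alive.

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

class WorkStealingDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newWorkStealingPool(); // parallelism = available processors

        // Submit several independent tasks; a work-stealing pool gives no ordering guarantee.
        List<Callable<Integer>> tasks = IntStream.rangeClosed(1, 8)
                .mapToObj(i -> (Callable<Integer>) () -> i * i)
                .collect(Collectors.toList());

        int total = 0;
        for (Future<Integer> f : pool.invokeAll(tasks)) {
            total += f.get();   // invokeAll waits for completion, so every result is available
        }
        System.out.println("sum of squares 1..8 = " + total);   // 204
    }
}
```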
<> Alibaba coding guidelines

Rule 1 - (6) - 4

【Mandatory】 Thread pools must not be created with Executors; use ThreadPoolExecutor instead. This
forces whoever writes the code to be explicit about how the thread pool runs and avoids the risk of resource exhaustion.
Explanation: the drawbacks of the thread pool objects returned by Executors are as follows:
1) FixedThreadPool and SingleThreadPool:
the allowed request queue length is Integer.MAX_VALUE, so a large number of requests may pile up and lead to OOM.
2) CachedThreadPool:
the allowed number of created threads is Integer.MAX_VALUE, so a large number of threads may be created and lead to OOM.
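A minimal sketch of a construction that follows this rule (the sizes, queue capacity and thread-name prefix are illustrative assumptions): the pool is built directly with ThreadPoolExecutor, uses a bounded queue so requests cannot pile up without limit, and names its threads so they are easy to identify in thread dumps.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

class ManualPoolFactory {
    // Hypothetical naming factory so pool threads are recognizable in logs and thread dumps
    static ThreadFactory named(String prefix) {
        AtomicInteger counter = new AtomicInteger(1);
        return r -> new Thread(r, prefix + "-" + counter.getAndIncrement());
    }

    static ThreadPoolExecutor newBoundedPool() {
        return new ThreadPoolExecutor(
                4, 8,                              // explicit core and maximum sizes
                60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(200),     // bounded queue instead of Integer.MAX_VALUE
                named("biz-pool"),
                new ThreadPoolExecutor.CallerRunsPolicy()); // explicit saturation policy
    }
}
```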

If you find any problems, feel free to point them out in the comments; I will accept corrections with an open mind and keep learning.
Please like and bookmark, thank you.
