Remember how, a few days ago, a reader reported that sending a message failed with a "request data too large" exception? Even after adjusting max.request.size, the following exception was still thrown:

After checking the relevant documentation, it turns out that the Broker also limits the size of messages it will accept from a Producer. The parameter is called message.max.bytes, and it determines the maximum message size the Broker can receive. Its default value is about 977 KB, while max.request.size had been set to 2 MB, which is far larger than message.max.bytes. So whenever a message exceeded roughly 977 KB, the exception above was thrown.
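A minimal sketch (not the Kafka client itself) of why the exception occurs: the broker's default limit is about 977 KB (1000012 bytes), so raising only the producer-side max.request.size to 2 MB does not help. The helper function below is illustrative; only the config names and the default value come from Kafka.

```python
# Broker-side default for message.max.bytes: 1000012 bytes (~977 KB).
DEFAULT_MESSAGE_MAX_BYTES = 1000012

def broker_accepts(record_size: int,
                   message_max_bytes: int = DEFAULT_MESSAGE_MAX_BYTES) -> bool:
    """Return True if a broker with this message.max.bytes would accept the record."""
    return record_size <= message_max_bytes

# max.request.size was raised to 2 MB on the producer, but the broker
# still rejects anything over ~977 KB, producing the exception above.
print(broker_accepts(2 * 1024 * 1024))   # 2 MB request: rejected
print(broker_accepts(900 * 1024))        # 900 KB request: accepted
```

The fix, as described below, is to raise the broker-side limit as well, not just the producer-side one.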

It is worth mentioning that there is also a topic-level parameter called max.message.bytes. It applies to a single topic, can be configured dynamically, and overrides message.max.bytes for that topic. The advantage is that different topics on the same Broker can each accept a different maximum message size, without restarting the Broker.
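A hedged sketch of how the effective per-topic limit resolves: the topic-level max.message.bytes, when set, overrides the broker-wide message.max.bytes. The config names are real Kafka settings; the resolution function itself is illustrative, not Kafka source.

```python
def effective_max_message_bytes(broker_config: dict, topic_config: dict) -> int:
    """Topic-level max.message.bytes wins; otherwise fall back to the broker default."""
    return topic_config.get("max.message.bytes",
                            broker_config.get("message.max.bytes", 1000012))

broker = {"message.max.bytes": 1000012}
big_topic = {"max.message.bytes": 10485760}   # 10 MB override for this topic only
plain_topic = {}                              # no override: broker value applies

print(effective_max_message_bytes(broker, big_topic))    # topic override
print(effective_max_message_bytes(broker, plain_topic))  # broker default
```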

That is still not the whole story: the amount of message data the consumer pulls also needs to be changed. The parameter here is fetch.max.bytes, which determines the maximum number of bytes a consumer fetches from a single Broker in one request. So here comes the question: if this value is smaller than max.request.size, consumers may be unable to consume messages larger than fetch.max.bytes.

So, in a nutshell, the settings need to look like this:

producer: max.request.size=5242880 (5 MB)
broker: message.max.bytes=6291456 (6 MB)
consumer: fetch.max.bytes=7340032 (7 MB)

That is: max.request.size < message.max.bytes < fetch.max.bytes.
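The recommended ordering can be sanity-checked in one line. This is a sketch using the exact values from the summary above; the dict and helper are illustrative, not part of any Kafka API.

```python
# The three size limits from the summary, in bytes.
settings = {
    "max.request.size":  5_242_880,   # producer, 5 MB
    "message.max.bytes": 6_291_456,   # broker,   6 MB
    "fetch.max.bytes":   7_340_032,   # consumer, 7 MB
}

def sizes_consistent(cfg: dict) -> bool:
    """Check max.request.size < message.max.bytes < fetch.max.bytes."""
    return (cfg["max.request.size"]
            < cfg["message.max.bytes"]
            < cfg["fetch.max.bytes"])

print(sizes_consistent(settings))
```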
One more thing to add. Remember how the batch.size parameter works? As the source code shows, each message the Producer sends is wrapped in a ProducerRecord, which the record accumulator, RecordAccumulator, then appends to a ProducerBatch. Because every new ProducerBatch requires a memory allocation of batch.size, frequently creating and releasing these buffers causes significant performance overhead. RecordAccumulator therefore keeps an internal BufferPool that reuses buffers, but only ByteBuffers of exactly batch.size are reused: a ProducerBatch larger than batch.size is never added to the BufferPool and is never reused.
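A toy sketch of the BufferPool idea described above: only buffers of exactly batch.size are pooled and reused, while larger allocations bypass the pool and are simply left for garbage collection. This mimics RecordAccumulator's behaviour in spirit only; the real Java implementation also handles blocking, memory caps, and thread safety.

```python
class ToyBufferPool:
    """Illustrative pool: reuses only buffers of exactly batch_size bytes."""

    def __init__(self, batch_size: int):
        self.batch_size = batch_size
        self.free = []                      # reusable batch.size buffers

    def allocate(self, size: int) -> bytearray:
        if size == self.batch_size and self.free:
            return self.free.pop()          # reuse a pooled buffer
        return bytearray(size)              # fresh allocation

    def deallocate(self, buf: bytearray) -> None:
        if len(buf) == self.batch_size:
            self.free.append(buf)           # pooled for reuse
        # larger buffers are not pooled: they just get garbage-collected

pool = ToyBufferPool(batch_size=16384)
a = pool.allocate(16384)
pool.deallocate(a)
b = pool.allocate(16384)
print(a is b)                # the batch.size buffer was reused
big = pool.allocate(32768)
pool.deallocate(big)
print(len(pool.free))        # the oversized buffer was not pooled
```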

There was a question earlier: if max.request.size is greater than batch.size, will a message be split across several batches and sent to the broker?

Obviously not. As described above, if a single ProducerRecord exceeds batch.size, its ProducerBatch will contain only that one ProducerRecord, and that ProducerBatch will not be added to the BufferPool.
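The batching decision above can be sketched as follows. This is illustrative logic, not the real RecordAccumulator: a record larger than batch.size is never split; it gets a batch of its own, while small records fill up shared batches.

```python
def assign_batches(record_sizes, batch_size):
    """Group record sizes into batches; oversized records get their own batch."""
    batches, current, used = [], [], 0
    for size in record_sizes:
        if size > batch_size:               # oversized: its own batch, never split
            if current:
                batches.append(current)
                current, used = [], 0
            batches.append([size])
        elif used + size > batch_size:      # current batch full: start a new one
            batches.append(current)
            current, used = [size], size
        else:
            current.append(size)
            used += size
    if current:
        batches.append(current)
    return batches

# Two small records share a batch; the 40 KB record sits alone in its own batch.
print(assign_batches([100, 200, 40_960, 300], batch_size=16_384))
```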

Therefore, when tuning a Kafka Producer, pay special attention to the relationship between the batch.size and max.request.size values for your workload, to avoid frequent allocation and release of buffer memory.

©2019-2020 Toolsou. All rights reserved.