This article shows how to integrate Disruptor, Kafka, and Netty into a data gateway. The code below may be a useful reference if you are building something similar.
The core of the gateway is a Netty server. Client applications (web servers, mobile apps, and so on) connect to this Netty server to request data. On the data side, the gateway listens to multiple Kafka topics; the topic list is variable, so Kafka consumers must be started and stopped dynamically. The data from all of these topics is then merged into a single stream and pushed to the client applications through their channels.
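Before diving into the code, the data flow can be pictured as many Kafka consumer threads feeding one merge point, whose single output is then broadcast to clients. A minimal stdlib analogue of that topology (all names here are illustrative; the real gateway uses a Disruptor ring buffer as the merge point):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MergeDemo {
    // Merge point standing in for the Disruptor ring buffer.
    static final BlockingQueue<String> merged = new LinkedBlockingQueue<>();

    // Each call simulates one Kafka consumer thread publishing a record.
    static void publish(String topic, String msg) {
        merged.add(topic + ":" + msg);
    }

    // A single drain sees the merged stream; in the gateway, this is where
    // records would be written out to the connected client channels.
    static List<String> drain() {
        List<String> out = new ArrayList<>();
        merged.drainTo(out);
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        // Two "topic consumers" publishing into the same merge point.
        Thread t1 = new Thread(() -> publish("topicA", "m1"));
        Thread t2 = new Thread(() -> publish("topicB", "m2"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(drain().size()); // 2
    }
}
```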
Most of the code is listed below for anyone who needs it as a reference. The key technical points are explained; feel free to skip over the business-specific parts.
First, start the Disruptor, then listen on a fixed control topic and hand each received message to ConsumerProcessorGroup, which takes care of creating and stopping the Kafka consumers.
public static void main(String[] args) {
    // Start the Disruptor before any Kafka consumer can publish into it.
    DisruptorHelper.getInstance().start();

    Properties props = ConsumerProps.getConsumerProps();
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    // A fixed control topic delivers the current list of UAV topics to consume.
    consumer.subscribe(Arrays.asList("uavlst"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        // Only the latest topic list matters, so keep just the last record of the batch.
        ConsumerRecord<String, String> lastRecord = null;
        for (ConsumerRecord<String, String> record : records) {
            lastRecord = record;
        }
        if (lastRecord != null) {
            ConsumerProcessorGroup.getInstance().recieveNewUavLst(lastRecord.value());
        }
    }
}
DisruptorHelper is a singleton that wraps a Disruptor instance. When constructing the Disruptor, ProducerType.MULTI and new BlockingWaitStrategy() are passed in: the former means multiple producers publish to the ring buffer concurrently, while the latter is simply the default wait strategy and can be tuned later based on real-world behavior.
public class DisruptorHelper {

    private static DisruptorHelper instance = null;

    // Lazy init is not thread-safe by itself; the first call happens on the
    // main thread before any consumer threads start, so it is safe here.
    public static DisruptorHelper getInstance() {
        if (instance == null) {
            instance = new DisruptorHelper();
        }
        return instance;
    }

    // Ring buffer size; must be a power of two.
    private final int BUFFER_SIZE = 1024;

    private Disruptor<MsgEvent> disruptor = null;

    private DisruptorHelper() {
        MsgEventHandler eventHandler = new MsgEventHandler();
        // ProducerType.MULTI: several Kafka consumer threads publish concurrently.
        // BlockingWaitStrategy: the default, CPU-friendly wait strategy.
        disruptor = new Disruptor<>(new MsgEventFactory(), BUFFER_SIZE,
                new ConsumerThreadFactory(), ProducerType.MULTI, new BlockingWaitStrategy());
        disruptor.handleEventsWith(eventHandler);
    }

    public void start() {
        disruptor.start();
    }

    public void shutdown() {
        disruptor.shutdown();
    }

    // Standard two-phase publish: claim a sequence, fill the slot, then publish.
    public void produce(ConsumerRecord<String, String> record) {
        RingBuffer<MsgEvent> ringBuffer = disruptor.getRingBuffer();
        long sequence = ringBuffer.next();
        try {
            ringBuffer.get(sequence).setRecord(record);
        } finally {
            // Publish in finally so a claimed sequence is never left unpublished.
            ringBuffer.publish(sequence);
        }
    }
}
ConsumerProcessorGroup is a singleton containing a fixed thread pool; it dynamically starts threads to consume Kafka topics.
public class ConsumerProcessorGroup {

    private static ConsumerProcessorGroup instance = null;

    public static ConsumerProcessorGroup getInstance() {
        if (instance == null) {
            instance = new ConsumerProcessorGroup();
        }
        return instance;
    }

    private ConsumerProcessorGroup() {
    }

    private ExecutorService fixedThreadPool = Executors.newFixedThreadPool(20);

    // Vector for thread safety: written by the control-topic thread,
    // read by every consumer thread.
    public List<String> uavIDLst = new Vector<String>();

    public void recieveNewUavLst(String uavIDs) {
        List<String> newUavIDs = Arrays.asList(uavIDs.split(","));
        // Start a consumer thread for every topic that is new in this list.
        for (String uavID : newUavIDs) {
            if (!uavIDLst.contains(uavID)) {
                fixedThreadPool.execute(new ConsumerThread(uavID));
                uavIDLst.add(uavID);
            }
        }
        // Topics no longer listed are removed; their threads observe this and exit.
        List<String> tmpLstForDel = new ArrayList<String>();
        for (String uavID : uavIDLst) {
            if (!newUavIDs.contains(uavID)) {
                tmpLstForDel.add(uavID);
            }
        }
        uavIDLst.removeAll(tmpLstForDel);
    }
}
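The start/stop decision above is really just a set difference between the previous topic list and the new one. A minimal stdlib sketch of that diff (class and method names here are illustrative, not from the original code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TopicDiff {
    // Topics present in next but not in current: consumers to start.
    static Set<String> toStart(List<String> current, List<String> next) {
        Set<String> s = new HashSet<>(next);
        s.removeAll(current);
        return s;
    }

    // Topics present in current but not in next: consumers to stop.
    static Set<String> toStop(List<String> current, List<String> next) {
        Set<String> s = new HashSet<>(current);
        s.removeAll(next);
        return s;
    }

    public static void main(String[] args) {
        List<String> current = Arrays.asList("uav1", "uav2");
        List<String> next = Arrays.asList("uav2", "uav3");
        System.out.println(toStart(current, next)); // [uav3]
        System.out.println(toStop(current, next));  // [uav1]
    }
}
```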
ConsumerThread consumes one Kafka topic and writes each fetched record into the Disruptor ring buffer via DisruptorHelper.
public class ConsumerThread implements Runnable {

    private String uavID;

    public ConsumerThread(String uavID) {
        this.uavID = uavID;
    }

    @Override
    public void run() {
        Properties props = ConsumerProps.getConsumerProps();
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(uavID));
        System.out.println(uavID + " consumer started! Current thread id is " + Thread.currentThread().getId());
        // Keep polling as long as this topic is still in the active list;
        // removing it from that list is how the control thread stops this consumer.
        while (ConsumerProcessorGroup.getInstance().uavIDLst.contains(uavID)) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                // Hand every record over to the Disruptor ring buffer.
                DisruptorHelper.getInstance().produce(record);
            }
        }
        consumer.close(); // release the consumer once its topic was removed
        System.out.println(uavID + " consumer finished! Current thread id is " + Thread.currentThread().getId());
    }
}
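Note that the stop mechanism above is cooperative: the thread exits on its own once its id disappears from the shared list, rather than being interrupted. A stripped-down stdlib sketch of the same pattern (names are illustrative; Thread.sleep stands in for consumer.poll):

```java
import java.util.List;
import java.util.Vector;

public class StoppableWorker implements Runnable {
    // Shared active-id list; removing an id asks its worker to exit.
    static final List<String> active = new Vector<>();

    private final String id;

    public StoppableWorker(String id) {
        this.id = id;
    }

    @Override
    public void run() {
        // Poll-style loop: re-check membership on every pass, like the Kafka loop.
        while (active.contains(id)) {
            try {
                Thread.sleep(1); // stands in for consumer.poll(100)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        active.add("uav1");
        Thread t = new Thread(new StoppableWorker("uav1"));
        t.start();
        Thread.sleep(20);
        active.remove("uav1"); // cooperative stop signal
        t.join(1000);
        System.out.println(t.isAlive()); // false
    }
}
```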
MsgEventHandler is the Disruptor consumer: it reads events from the ring buffer in sequence and applies the corresponding processing.
public class MsgEventHandler implements EventHandler<MsgEvent> {

    @Override
    public void onEvent(MsgEvent event, long sequence, boolean endOfBatch) throws Exception {
        ConsumerRecord<String, String> record = event.getRecord();
        // Placeholder processing: in the real gateway this is where records
        // would be forwarded to connected clients.
        System.out.printf("topic = %s, part = %d, offset = %d, key = %s, value = %s%n",
                record.topic(), record.partition(), record.offset(), record.key(), record.value());
    }
}
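The handler above only prints each record; in the finished gateway it would push the merged stream to clients over Netty channels (typically a ChannelGroup broadcast via writeAndFlush). As a library-free sketch of that fan-out, using a hypothetical Subscriber interface in place of Netty's Channel:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class Broadcaster {
    // Stand-in for a Netty Channel; in the real gateway this would be
    // io.netty.channel.Channel and a writeAndFlush() call.
    interface Subscriber {
        void send(String msg);
    }

    // CopyOnWriteArrayList mirrors what a ChannelGroup provides: safe to
    // iterate while connections are added or removed concurrently.
    private final List<Subscriber> subscribers = new CopyOnWriteArrayList<>();

    public void register(Subscriber s) {
        subscribers.add(s);
    }

    // Called from the Disruptor event handler for every record.
    public void broadcast(String msg) {
        for (Subscriber s : subscribers) {
            s.send(msg);
        }
    }

    public static void main(String[] args) {
        Broadcaster b = new Broadcaster();
        StringBuilder received = new StringBuilder();
        b.register(received::append);
        b.register(received::append);
        b.broadcast("x");
        System.out.println(received); // xx
    }
}
```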