This article explains how an I/O request is processed in Netty. The walkthrough is hands-on and practical — let's dig in.
As covered in the previous article, once Netty starts up, a number of eventloop threads keep looping (a trait common to servers), performing select or executing tasks. Let's revisit how NioEventLoop implements this.
Let's start with the NioEventLoop class diagram:
It looks quite complex — never mind the details. Its core method is, naturally, run():
// io.netty.channel.nio.NioEventLoop#run
@Override
protected void run() {
    // an endless detection loop: the heart of the eventloop
    for (;;) {
        try {
            switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) {
                case SelectStrategy.CONTINUE:
                    continue;
                // run tasks if there are any; otherwise block waiting for network events or a wakeup
                case SelectStrategy.SELECT:
                    // selector.select() with a timeout cap
                    select(wakenUp.getAndSet(false));
                    // 'wakenUp.compareAndSet(false, true)' is always evaluated
                    // before calling 'selector.wakeup()' to reduce the wake-up
                    // overhead. (Selector.wakeup() is an expensive operation.)
                    //
                    // However, there is a race condition in this approach.
                    // The race condition is triggered when 'wakenUp' is set to
                    // true too early.
                    //
                    // 'wakenUp' is set to true too early if:
                    // 1) Selector is waken up between 'wakenUp.set(false)' and
                    //    'selector.select(...)'. (BAD)
                    // 2) Selector is waken up between 'selector.select(...)' and
                    //    'if (wakenUp.get()) { ... }'. (OK)
                    //
                    // In the first case, 'wakenUp' is set to true and the
                    // following 'selector.select(...)' will wake up immediately.
                    // Until 'wakenUp' is set to false again in the next round,
                    // 'wakenUp.compareAndSet(false, true)' will fail, and therefore
                    // any attempt to wake up the Selector will fail, too, causing
                    // the following 'selector.select(...)' call to block
                    // unnecessarily.
                    //
                    // To fix this problem, we wake up the selector again if wakenUp
                    // is true immediately after selector.select(...).
                    // It is inefficient in that it wakes up the selector for both
                    // the first case (BAD - wake-up required) and the second case
                    // (OK - no wake-up required).
                    if (wakenUp.get()) {
                        selector.wakeup();
                    }
                    // fall through
                default:
            }

            cancelledKeys = 0;
            needsToSelectAgain = false;
            // ioRatio is the share of time given to I/O versus running tasks; default 50:50
            final int ioRatio = this.ioRatio;
            if (ioRatio == 100) {
                try {
                    // step1. run I/O operations
                    processSelectedKeys();
                } finally {
                    // Ensure we always run tasks.
                    // step2. run queued tasks
                    runAllTasks();
                }
            } else {
                final long ioStartTime = System.nanoTime();
                try {
                    processSelectedKeys();
                } finally {
                    // Ensure we always run tasks.
                    final long ioTime = System.nanoTime() - ioStartTime;
                    // maximum time budget for running tasks
                    runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
                }
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
        // Always handle shutdown even if the loop processing threw an exception.
        try {
            if (isShuttingDown()) {
                closeAll();
                if (confirmShutdown()) {
                    return;
                }
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
    }
}

// select(): the basis of the event loop
private void select(boolean oldWakenUp) throws IOException {
    Selector selector = this.selector;
    try {
        int selectCnt = 0;
        long currentTimeNanos = System.nanoTime();
        // capped timeout, at most ~1s by default, or driven by the next scheduled task
        long selectDeadLineNanos = currentTimeNanos + delayNanos(currentTimeNanos);
        for (;;) {
            long timeoutMillis = (selectDeadLineNanos - currentTimeNanos + 500000L) / 1000000L;
            if (timeoutMillis <= 0) {
                // deadline reached: do a non-blocking selectNow() and return
                if (selectCnt == 0) {
                    selector.selectNow();
                    selectCnt = 1;
                }
                break;
            }

            // If a task was submitted when wakenUp value was true, the task didn't get a chance to call
            // Selector#wakeup. So we need to check task queue again before executing select operation.
            // If we don't, the task might be pended until select operation was timed out.
            // It might be pended until idle timeout if IdleStateHandler existed in pipeline.
            if (hasTasks() && wakenUp.compareAndSet(false, true)) {
                selector.selectNow();
                selectCnt = 1;
                break;
            }

            int selectedKeys = selector.select(timeoutMillis);
            selectCnt++;

            if (selectedKeys != 0 || oldWakenUp || wakenUp.get() || hasTasks() || hasScheduledTasks()) {
                // - Selected something,
                // - waken up by user, or
                // - the task queue has a pending task.
                // - a scheduled task is ready for processing
                break;
            }
            if (Thread.interrupted()) {
                // Thread was interrupted so reset selected keys and break so we not run into a busy loop.
                // As this is most likely a bug in the handler of the user or it's client library we will
                // also log it.
                //
                // See https://github.com/netty/netty/issues/2426
                if (logger.isDebugEnabled()) {
                    logger.debug("Selector.select() returned prematurely because " +
                            "Thread.currentThread().interrupt() was called. Use " +
                            "NioEventLoop.shutdownGracefully() to shutdown the NioEventLoop.");
                }
                selectCnt = 1;
                break;
            }

            long time = System.nanoTime();
            if (time - TimeUnit.MILLISECONDS.toNanos(timeoutMillis) >= currentTimeNanos) {
                // timeoutMillis elapsed without anything selected.
                selectCnt = 1;
            } else if (SELECTOR_AUTO_REBUILD_THRESHOLD > 0 &&
                    selectCnt >= SELECTOR_AUTO_REBUILD_THRESHOLD) {
                // The selector returned prematurely many times in a row.
                // Rebuild the selector to work around the problem.
                logger.warn(
                        "Selector.select() returned prematurely {} times in a row; rebuilding Selector {}.",
                        selectCnt, selector);

                rebuildSelector();
                selector = this.selector;

                // Select again to populate selectedKeys.
                selector.selectNow();
                selectCnt = 1;
                break;
            }

            currentTimeNanos = time;
        }

        if (selectCnt > MIN_PREMATURE_SELECTOR_RETURNS) {
            if (logger.isDebugEnabled()) {
                logger.debug("Selector.select() returned prematurely {} times in a row for Selector {}.",
                        selectCnt - 1, selector);
            }
        }
    } catch (CancelledKeyException e) {
        if (logger.isDebugEnabled()) {
            logger.debug(CancelledKeyException.class.getSimpleName() + " raised by a Selector {} - JDK bug?",
                    selector, e);
        }
        // Harmless exception - log anyway
    }
}
In short: an eventloop is an always-running thread that keeps checking whether network events have occurred or new tasks have been submitted, and executes whatever it finds.
When juggling I/O events and tasks, it uses an ioRatio to prevent scheduling starvation: if I/O work consumed ioTime, tasks are given a proportional share of time in return, keeping the two fair. With the default ioRatio of 50, for example, 2 ms spent on I/O grants runAllTasks() a 2 ms budget as well.
In the implementation, network I/O events are discovered via selector.select(), and pending tasks via hasTasks(). Each select sleeps for at most about one second, so that an unexpected hiccup cannot leave the system looking dead.
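To make the shape of this loop concrete, here is a minimal, self-contained reactor sketch in plain Java NIO — this is not Netty's code, and the class and field names are illustrative — showing the three moves NioEventLoop#run makes: select with a timeout, process the ready keys, then drain tasks under an ioRatio-derived budget.

import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public final class MiniEventLoop implements Runnable {
    private final Selector selector;
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();
    private final int ioRatio = 50; // same default split as Netty

    MiniEventLoop(Selector selector) { this.selector = selector; }

    public void execute(Runnable task) {
        tasks.offer(task);
        selector.wakeup(); // let a blocked select() notice the new task
    }

    @Override
    public void run() {
        for (;;) {
            try {
                // don't block if tasks are already waiting; otherwise wait up to 1s
                if (tasks.isEmpty()) selector.select(1000); else selector.selectNow();

                long ioStart = System.nanoTime();
                processSelectedKeys();
                long ioTime = System.nanoTime() - ioStart;

                // tasks get (100 - ioRatio) / ioRatio of the time I/O just used
                long deadline = System.nanoTime() + ioTime * (100 - ioRatio) / ioRatio;
                Runnable task;
                while ((task = tasks.poll()) != null) {
                    task.run();
                    if (System.nanoTime() >= deadline) break;
                }
            } catch (Exception e) {
                e.printStackTrace(); // a real loop would log and keep going
            }
        }
    }

    private void processSelectedKeys() {
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            // dispatch on key.readyOps() here: accept / read / write ...
        }
    }
}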
The I/O side is mostly about watching network events — new connection requests, reads, writes, closes — one of the most central functions of any network application. In the eventloop's core loop we saw that processSelectedKeys() is what handles these network I/O events.
// io.netty.channel.nio.NioEventLoop#processSelectedKeys
private void processSelectedKeys() {
    // selectedKeys was initialized during the earlier bind(), so it is not null here
    if (selectedKeys != null) {
        processSelectedKeysOptimized();
    } else {
        processSelectedKeysPlain(selector.selectedKeys());
    }
}

private void processSelectedKeysOptimized() {
    // when no network event occurred, selectedKeys.size == 0 and nothing happens
    for (int i = 0; i < selectedKeys.size; ++i) {
        // when events did occur, selectedKeys holds the ready keys
        final SelectionKey k = selectedKeys.keys[i];
        // null out entry in the array to allow to have it GC'ed once the Channel close
        // See https://github.com/netty/netty/issues/2363
        selectedKeys.keys[i] = null;

        final Object a = k.attachment();

        if (a instanceof AbstractNioChannel) {
            // cast to the corresponding channel and dispatch
            processSelectedKey(k, (AbstractNioChannel) a);
        } else {
            @SuppressWarnings("unchecked")
            NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
            processSelectedKey(k, task);
        }

        if (needsToSelectAgain) {
            // null out entries in the array to allow to have it GC'ed once the Channel close
            // See https://github.com/netty/netty/issues/2363
            selectedKeys.reset(i + 1);

            selectAgain();
            i = -1;
        }
    }
}

// process one ready socket
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
    if (!k.isValid()) {
        final EventLoop eventLoop;
        try {
            eventLoop = ch.eventLoop();
        } catch (Throwable ignored) {
            // If the channel implementation throws an exception because there is no event loop, we ignore this
            // because we are only trying to determine if ch is registered to this event loop and thus has authority
            // to close ch.
            return;
        }
        // Only close ch if ch is still registered to this EventLoop. ch could have deregistered from the event loop
        // and thus the SelectionKey could be cancelled as part of the deregistration process, but the channel is
        // still healthy and should not be closed.
        // See https://github.com/netty/netty/issues/5125
        if (eventLoop != this || eventLoop == null) {
            return;
        }
        // close the channel if the key is not valid anymore
        unsafe.close(unsafe.voidPromise());
        return;
    }

    try {
        // inspect which events are ready
        int readyOps = k.readyOps();
        // We first need to call finishConnect() before try to trigger a read(...) or write(...) as otherwise
        // the NIO JDK channel implementation may throw a NotYetConnectedException.
        // for a connect event, finish the connection first, firing the finishConnect() event chain
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
            // See https://github.com/netty/netty/issues/924
            int ops = k.interestOps();
            ops &= ~SelectionKey.OP_CONNECT;
            k.interestOps(ops);

            unsafe.finishConnect();
        }

        // Process OP_WRITE first as we may be able to write some queued buffers and so free memory.
        // for a write event, force the channel to flush its queued data
        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            // Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
            ch.unsafe().forceFlush();
        }

        // Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
        // to a spin loop
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            // OP_READ and OP_ACCEPT land here; event handling starts with unsafe.read()
            unsafe.read();
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}

// io.netty.channel.nio.AbstractNioMessageChannel.NioMessageUnsafe#read
@Override
public void read() {
    // only the I/O thread itself may call read(); any other caller indicates a bug
    assert eventLoop().inEventLoop();
    // fetch the config and the pipeline
    final ChannelConfig config = config();
    final ChannelPipeline pipeline = pipeline();
    // let the allocator hand out a receive handle (io.netty.channel.AdaptiveRecvByteBufAllocator.HandleImpl)
    // and reset its read state
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    allocHandle.reset(config);

    boolean closed = false;
    Throwable exception = null;
    try {
        try {
            do {
                // 1. read the initial data (here: accept new sockets)
                int localRead = doReadMessages(readBuf);
                if (localRead == 0) {
                    break;
                }
                if (localRead < 0) {
                    closed = true;
                    break;
                }

                allocHandle.incMessagesRead(localRead);
                // allocHandle decides whether reading is done
            } while (allocHandle.continueReading());
        } catch (Throwable t) {
            exception = t;
        }

        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            readPending = false;
            // 2. fire channelRead(): each channel obtained from accept() is pushed into the pipeline;
            //    this pipeline is head -> ServerBootstrapAcceptor -> tail
            pipeline.fireChannelRead(readBuf.get(i));
        }
        readBuf.clear();
        allocHandle.readComplete();
        // fire channelReadComplete().
        // note: the read may not truly be complete yet — is firing complete too early?
        // yes, but no need to worry: the eventLoop guarantees earlier submissions run first,
        // so it is safe to just submit here.
        // this is also why accept never blocks the eventLoop: although everything shares the
        // eventLoop, accept itself returns very quickly
        pipeline.fireChannelReadComplete();

        if (exception != null) {
            closed = closeOnReadError(exception);

            pipeline.fireExceptionCaught(exception);
        }

        if (closed) {
            inputShutdown = true;
            if (isOpen()) {
                close(voidPromise());
            }
        }
    } finally {
        // Check if there is a readPending which was not processed yet.
        // This could be for two reasons:
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
        //
        // See https://github.com/netty/netty/issues/2254
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}
The overall flow for handling a single I/O event:
1. Use AdaptiveRecvByteBufAllocator to allocate a fresh buffer for receiving the new data;
2. Call doReadMessages(), which delegates to accept() to take the new socket in and stores it in the buffer for later use;
3. For each accepted socket, call pipeline.fireChannelRead() to run the read path;
4. Call pipeline.fireChannelReadComplete() to fire the read-complete event;
5. Handle exceptions.
Note that the thread running here belongs to the bossGroup, whose pipeline is essentially fixed: head -> acceptor -> tail. Our own handlers live in the childGroup, so they have to wait their turn.
Let's break these steps down.
After AdaptiveRecvByteBufAllocator provides a fresh allocHandle, the socket is accepted — in essence a call to serverSocketChannel.accept(), the initial "read". Let's look:
// prepare the allocHandle used to decide when reading is complete
// io.netty.channel.AbstractChannel.AbstractUnsafe#recvBufAllocHandle
@Override
public RecvByteBufAllocator.Handle recvBufAllocHandle() {
    if (recvHandle == null) {
        recvHandle = config().getRecvByteBufAllocator().newHandle();
    }
    return recvHandle;
}

// reset the read state
// io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator.MaxMessageHandle#reset
@Override
public void reset(ChannelConfig config) {
    this.config = config;
    maxMessagePerRead = maxMessagesPerRead();
    totalMessages = totalBytesRead = 0;
}

// the allocHandle decides whether reading is finished
// io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator.MaxMessageHandle#continueReading()
@Override
public boolean continueReading() {
    return continueReading(defaultMaybeMoreSupplier);
}

@Override
public boolean continueReading(UncheckedBooleanSupplier maybeMoreDataSupplier) {
    return config.isAutoRead() &&
           (!respectMaybeMoreData || maybeMoreDataSupplier.get()) &&
           // on accept, totalMessages = 1, so that condition holds,
           // but totalBytesRead = 0, so this returns false and the loop ends
           totalMessages < maxMessagePerRead && totalBytesRead > 0;
}

// accept the new socket
@Override
protected int doReadMessages(List<Object> buf) throws Exception {
    // note: Netty first learns that an event has arrived and only then calls accept().
    // accept() could block the calling thread, but since select() has already woken up,
    // the connection is ready and the call returns immediately
    SocketChannel ch = SocketUtils.accept(javaChannel());

    try {
        if (ch != null) {
            // add the accepted channel to the result buffer
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        logger.warn("Failed to create a new channel from an accepted socket.", t);

        try {
            ch.close();
        } catch (Throwable t2) {
            logger.warn("Failed to close a socket.", t2);
        }
    }

    return 0;
}

// io.netty.util.internal.SocketUtils#accept
public static SocketChannel accept(final ServerSocketChannel serverSocketChannel) throws IOException {
    try {
        return AccessController.doPrivileged(new PrivilegedExceptionAction<SocketChannel>() {
            @Override
            public SocketChannel run() throws IOException {
                return serverSocketChannel.accept();
            }
        });
    } catch (PrivilegedActionException e) {
        throw (IOException) e.getCause();
    }
}
The newly accepted socket is wrapped in a NioSocketChannel, added to readBuf, and we move on.
Once the sockets are accepted, their data is processed one by one (so yes, the loop above may accept several sockets in one pass). Now the pipeline machinery takes the stage: the pipeline holds head, acceptor and tail, but only the acceptor actually processes the data.
// channelRead() notification: starts at head, handled by the acceptor
// io.netty.channel.DefaultChannelPipeline#fireChannelRead
@Override
public final ChannelPipeline fireChannelRead(Object msg) {
    // start propagation with the pipeline's head node as the first channelHandler.
    // head is DefaultChannelPipeline.HeadContext, which handles both inbound and outbound
    // data: it implements ChannelOutboundHandler and ChannelInboundHandler
    AbstractChannelHandlerContext.invokeChannelRead(head, msg);
    return this;
}

// io.netty.channel.AbstractChannelHandlerContext#invokeChannelRead(io.netty.channel.AbstractChannelHandlerContext, java.lang.Object)
static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
    // an extension point: if the msg implements ReferenceCounted, a wrapped msg is
    // produced and its touch method is invoked
    final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        // data discovered by the current event loop takes this path directly
        next.invokeChannelRead(m);
    } else {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                next.invokeChannelRead(m);
            }
        });
    }
}

// io.netty.channel.AbstractChannelHandlerContext#invokeChannelRead(java.lang.Object)
private void invokeChannelRead(Object msg) {
    if (invokeHandler()) {
        try {
            // invoke the real channelRead()
            ((ChannelInboundHandler) handler()).channelRead(this, msg);
        } catch (Throwable t) {
            notifyHandlerException(t);
        }
    } else {
        fireChannelRead(msg);
    }
}

// io.netty.channel.DefaultChannelPipeline.HeadContext#channelRead
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    // head has nothing special to do here: just keep the event moving with fireChannelRead()
    ctx.fireChannelRead(msg);
}

// io.netty.channel.AbstractChannelHandlerContext#fireChannelRead
@Override
public ChannelHandlerContext fireChannelRead(final Object msg) {
    // find the next inbound handler (as seen before: starting from the current node, walk the
    // pipeline to the next inbound ChannelHandlerContext) and invoke it.
    // unlike the head invocation, which is hard-coded, this step is dynamic and recursive;
    // the real difference lies in each channelHandler's implementation handling its own concern.
    // data fresh from accept() inevitably passes through the Acceptor, as follows
    invokeChannelRead(findContextInbound(), msg);
    return this;
}

// after a few hops we reach ServerBootstrapAcceptor, which does the real work:
// submitting the channel to the childGroup
// io.netty.bootstrap.ServerBootstrap.ServerBootstrapAcceptor#channelRead
@Override
@SuppressWarnings("unchecked")
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // recover the channel and attach the user's childHandler to its pipeline.
    // attachment works as before: name generation, ChannelHandlerContext construction, etc.
    final Channel child = (Channel) msg;

    // bind the configured childHandler into the child pipeline; this is when
    // ChannelInitializer.initChannel() fires — once for every newly accepted socket
    child.pipeline().addLast(childHandler);

    // copy the configured options and attributes onto the child
    setChannelOptions(child, childOptions, logger);

    for (Entry<AttributeKey<?>, Object> e: childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }

    try {
        // register the child and attach a callback.
        // register() binds the channel to one eventLoop thread: all later operations run there.
        // it also registers the channel with that loop's NIO selector.
        // with that, the acceptor's job is done
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
The acceptor's main job is handing the socket over to the childGroup. The child registration works just like the boss registration; the biggest difference is the interest set: the acceptor cares about OP_ACCEPT, while the childGroup cares about OP_READ.
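For reference, here is a minimal server bootstrap wiring the two groups together (standard Netty 4 API; the port and the inline echo handler are illustrative). It makes the pieces above visible: the single boss loop accepts, ServerBootstrapAcceptor registers each child with the childGroup, and initChannel() runs once per accepted connection on the child's event loop.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public final class EchoServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);  // accepts connections (OP_ACCEPT)
        EventLoopGroup childGroup = new NioEventLoopGroup();  // handles I/O (OP_READ / OP_WRITE)
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, childGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // runs on every accept, populating this connection's pipeline
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             ctx.writeAndFlush(msg); // echo the bytes back
                         }
                     });
                 }
             });
            ChannelFuture f = b.bind(8080).sync();
            f.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            childGroup.shutdownGracefully();
        }
    }
}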
In the bossGroup the readComplete() event is essentially ignored, but it is still a good vehicle for seeing how readComplete propagates — overall, it travels exactly like the read() event.
// io.netty.channel.DefaultChannelPipeline#fireChannelReadComplete
@Override
public final ChannelPipeline fireChannelReadComplete() {
    // again propagate starting from head
    AbstractChannelHandlerContext.invokeChannelReadComplete(head);
    return this;
}

// the generic way a handler is invoked
// io.netty.channel.AbstractChannelHandlerContext#invokeChannelReadComplete(io.netty.channel.AbstractChannelHandlerContext)
static void invokeChannelReadComplete(final AbstractChannelHandlerContext next) {
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        next.invokeChannelReadComplete();
    } else {
        Runnable task = next.invokeChannelReadCompleteTask;
        if (task == null) {
            next.invokeChannelReadCompleteTask = task = new Runnable() {
                @Override
                public void run() {
                    next.invokeChannelReadComplete();
                }
            };
        }
        executor.execute(task);
    }
}

// the generic pipeline invocation model
// io.netty.channel.AbstractChannelHandlerContext#invokeChannelReadComplete()
private void invokeChannelReadComplete() {
    if (invokeHandler()) {
        try {
            ((ChannelInboundHandler) handler()).channelReadComplete(this);
        } catch (Throwable t) {
            notifyHandlerException(t);
        }
    } else {
        fireChannelReadComplete();
    }
}

// io.netty.channel.DefaultChannelPipeline.HeadContext#channelReadComplete
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
    ctx.fireChannelReadComplete();

    readIfIsAutoRead();
}

// io.netty.channel.AbstractChannelHandlerContext#fireChannelReadComplete
@Override
public ChannelHandlerContext fireChannelReadComplete() {
    // the generic fireXXX propagation: call fireXXX to reach the next node,
    // otherwise the pipeline stops here.
    // the next inbound handler is found starting from the current node;
    // in the acceptor pipeline this eventually lands on ServerBootstrapAcceptor's readComplete
    invokeChannelReadComplete(findContextInbound());
    return this;
}

// io.netty.channel.ChannelInboundHandlerAdapter#channelReadComplete
/**
 * Calls {@link ChannelHandlerContext#fireChannelReadComplete()} to forward
 * to the next {@link ChannelInboundHandler} in the {@link ChannelPipeline}.
 *
 * Sub-classes may override this method to change behavior.
 */
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
    // ServerBootstrapAcceptor does not override channelReadComplete, so the event is simply
    // passed along; tail's default onUnhandledInboundChannelReadComplete() is a no-op as well
    ctx.fireChannelReadComplete();
}
To summarize how the pipeline propagates an event:
1. The event is kicked off with a call such as pipeline.fireChannelReadComplete();
2. invokeChannelReadComplete is called with head (or tail) as the starting point;
3. If we are on the eventloop, next.invokeChannelReadComplete() is called directly; otherwise it is submitted as a task;
4. handler.channelReadComplete(this) triggers the concrete handler callback;
5. The handler does its work and calls ctx.fireChannelReadComplete() to continue to the next node, or stops the propagation by not calling it.
The walkthrough above used fireChannelReadComplete, but this is how virtually every pipeline event propagates. A minimal custom handler illustrating the pattern follows.
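As a concrete instance of that contract, here is a small sketch (handler name assumed) of a user handler that receives events via its channelXxx() methods and decides whether to keep them moving by calling ctx.fireChannelXxx():

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class LoggingInboundHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("read: " + msg);
        ctx.fireChannelRead(msg); // pass to the next inbound handler; drop this line to stop here
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        System.out.println("read complete");
        ctx.fireChannelReadComplete(); // keep the readComplete event moving toward tail
    }
}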
The previous section left off with the acceptor taking a socket in, submitting it to the childGroup and returning. So how does the childGroup process its work?
It works exactly the same way as the bossGroup; the differences are the pipelines and the thread counts, which is how the two end up doing different jobs. The child side handles read/write events, whereas the acceptor handles OP_ACCEPT. Its OP_READ interest was registered when the NioSocketChannel was created. Let's look:
// while handling the Accept event in the bossGroup, a NioSocketChannel is created
// io.netty.channel.socket.nio.NioServerSocketChannel#doReadMessages
@Override
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = SocketUtils.accept(javaChannel());

    try {
        if (ch != null) {
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        logger.warn("Failed to create a new channel from an accepted socket.", t);

        try {
            ch.close();
        } catch (Throwable t2) {
            logger.warn("Failed to close a socket.", t2);
        }
    }

    return 0;
}

// io.netty.channel.socket.nio.NioSocketChannel#NioSocketChannel
/**
 * Create a new instance
 *
 * @param parent the {@link Channel} which created this instance or {@code null} if it was created by the user
 * @param socket the {@link SocketChannel} which will be used
 */
public NioSocketChannel(Channel parent, SocketChannel socket) {
    // the event interest is handled in the parent class
    super(parent, socket);
    config = new NioSocketChannelConfig(this, socket.socket());
}

// io.netty.channel.nio.AbstractNioByteChannel#AbstractNioByteChannel
/**
 * Create a new instance
 *
 * @param parent the parent {@link Channel} by which this instance was created. May be {@code null}
 * @param ch the underlying {@link SelectableChannel} on which it operates
 */
protected AbstractNioByteChannel(Channel parent, SelectableChannel ch) {
    // register interest in OP_READ
    super(parent, ch, SelectionKey.OP_READ);
}
OK, back to the childGroup's event flow. Both groups are NioEventLoopGroups, so the eventloops they create are identical: each processes I/O events and runs tasks. Recall processSelectedKey() from the previous section:
// io.netty.channel.nio.NioEventLoop#processSelectedKey(java.nio.channels.SelectionKey, io.netty.channel.nio.AbstractNioChannel)
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
    if (!k.isValid()) {
        final EventLoop eventLoop;
        try {
            eventLoop = ch.eventLoop();
        } catch (Throwable ignored) {
            // If the channel implementation throws an exception because there is no event loop, we ignore this
            // because we are only trying to determine if ch is registered to this event loop and thus has authority
            // to close ch.
            return;
        }
        // Only close ch if ch is still registered to this EventLoop. ch could have deregistered from the event loop
        // and thus the SelectionKey could be cancelled as part of the deregistration process, but the channel is
        // still healthy and should not be closed.
        // See https://github.com/netty/netty/issues/5125
        if (eventLoop != this || eventLoop == null) {
            return;
        }
        // close the channel if the key is not valid anymore
        unsafe.close(unsafe.voidPromise());
        return;
    }

    try {
        int readyOps = k.readyOps();
        // We first need to call finishConnect() before try to trigger a read(...) or write(...) as otherwise
        // the NIO JDK channel implementation may throw a NotYetConnectedException.
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
            // See https://github.com/netty/netty/issues/924
            int ops = k.interestOps();
            ops &= ~SelectionKey.OP_CONNECT;
            k.interestOps(ops);

            unsafe.finishConnect();
        }

        // Process OP_WRITE first as we may be able to write some queued buffers and so free memory.
        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            // Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
            ch.unsafe().forceFlush();
        }

        // Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
        // to a spin loop
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            // a different unsafe implementation takes over here
            unsafe.read();
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}

// io.netty.channel.nio.AbstractNioByteChannel.NioByteUnsafe#read
@Override
public final void read() {
    final ChannelConfig config = config();
    // check whether reading should stop, e.g. because the socket was closed
    if (shouldBreakReadReady(config)) {
        clearReadPending();
        return;
    }
    // step1. set up the environment: pipeline, allocator...
    // this pipeline now contains the handlers we registered ourselves
    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    final RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();
    allocHandle.reset(config);

    ByteBuf byteBuf = null;
    boolean close = false;
    try {
        do {
            // a fresh buffer is allocated on every loop pass, 1024 bytes by default
            byteBuf = allocHandle.allocate(allocator);
            // step2. read data into byteBuf; allocHandle records how much was read
            allocHandle.lastBytesRead(doReadBytes(byteBuf));
            // -1 is read when the data is exhausted or the peer closed
            if (allocHandle.lastBytesRead() <= 0) {
                // nothing was read. release the buffer.
                byteBuf.release();
                byteBuf = null;
                close = allocHandle.lastBytesRead() < 0;
                if (close) {
                    // There is nothing left to read as we received an EOF.
                    readPending = false;
                }
                break;
            }

            // one more message read
            allocHandle.incMessagesRead(1);
            readPending = false;
            // step3. fire the pipeline's channelRead() event
            pipeline.fireChannelRead(byteBuf);
            byteBuf = null;
        } while (allocHandle.continueReading());

        allocHandle.readComplete();
        // fire the channelReadComplete event and propagate it
        pipeline.fireChannelReadComplete();

        if (close) {
            closeOnRead(pipeline);
        }
    } catch (Throwable t) {
        handleReadException(pipeline, byteBuf, t, close, allocHandle);
    } finally {
        // Check if there is a readPending which was not processed yet.
        // This could be for two reasons:
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
        //
        // See https://github.com/netty/netty/issues/2254
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}
That is the basic flow of the childGroup's I/O handling. It closely mirrors the acceptor's — a sign of how cleanly Netty's abstraction is factored: the same pattern fits everywhere. The steps (a small configuration sketch follows the list):
1. Prepare the environment: fetch the pipeline and config, allocate memory;
2. doReadBytes() reads a chunk of data into the buffer, by default up to 1024 bytes per allocation;
3. After each chunk, fire channelRead() down the pipeline so every handler gets a chance at the data;
4. Repeat steps 2 and 3 as long as data remains and the per-read limit has not been exceeded;
5. Fire one overall channelReadComplete event and propagate it through the pipeline in the same way;
6. Handle exceptions and close.
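A hedged sketch of the knobs shaping this read loop — the ChannelOption keys below exist in Netty 4, the values are illustrative. MAX_MESSAGES_PER_READ caps how many passes the do/while may make per select wakeup, AUTO_READ controls whether OP_READ stays armed automatically, and the AdaptiveRecvByteBufAllocator arguments set the min/initial/max buffer sizes:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.AdaptiveRecvByteBufAllocator;
import io.netty.channel.ChannelOption;

public final class ReadLoopConfig {
    static ServerBootstrap configure(ServerBootstrap b) {
        return b
            .childOption(ChannelOption.MAX_MESSAGES_PER_READ, 16) // cap loop iterations per wakeup
            .childOption(ChannelOption.AUTO_READ, true)           // default: keep OP_READ registered
            .childOption(ChannelOption.RCVBUF_ALLOCATOR,
                    new AdaptiveRecvByteBufAllocator(64, 1024, 65536)); // min / initial / max
    }
}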
We have already seen how the pipeline propagates; the pattern is: read() is an inbound event that starts at head and invokes each handler's channelRead() in turn, all the way to the tail.
Next, let's look at how Netty implements a few of the key steps.
// as you might guess: read bytes from the socket into the byteBuf
// io.netty.channel.socket.nio.NioSocketChannel#doReadBytes
@Override
protected int doReadBytes(ByteBuf byteBuf) throws Exception {
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    allocHandle.attemptedBytesRead(byteBuf.writableBytes());
    // grab the SocketChannel and read its data into byteBuf: a copy from the kernel to the heap
    return byteBuf.writeBytes(javaChannel(), allocHandle.attemptedBytesRead());
}

// io.netty.buffer.AbstractByteBuf#writeBytes
@Override
public int writeBytes(ScatteringByteChannel in, int length) throws IOException {
    ensureWritable(length);
    int writtenBytes = setBytes(writerIndex, in, length);
    // keep the writer index in sync
    if (writtenBytes > 0) {
        writerIndex += writtenBytes;
    }
    return writtenBytes;
}

// io.netty.buffer.PooledUnsafeDirectByteBuf#setBytes
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
    checkIndex(index, length);
    // get the ByteBuf's shared internal nio buffer; writes to it are visible through the ByteBuf.
    // this is where the DirectByteBuffer comes into play
    ByteBuffer tmpBuf = internalNioBuffer();
    index = idx(index);
    tmpBuf.clear().position(index).limit(index + length);
    try {
        // read data from the socketChannel into tmpBuf.
        // this looks like a memory copy, but with direct memory no new heap copy is made:
        // the native buffer is used directly
        return in.read(tmpBuf);
    } catch (ClosedChannelException ignored) {
        return -1;
    }
}
That is the socket read path: overall, it can be described as a copy from kernel memory into Java memory (though the concrete mechanism, as the direct-buffer case shows, is another matter).
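A plain-NIO sketch (not Netty code; class and method names assumed) of what doReadBytes() above ultimately reduces to: SocketChannel.read(ByteBuffer) copies the kernel's received bytes into the buffer, and with a direct ByteBuffer the JDK can hand the native address straight to the read syscall, avoiding the extra staging copy a heap buffer would need.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

final class RawRead {
    static int readInto(SocketChannel ch, int maxBytes) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(maxBytes); // direct: no heap staging copy
        int n = ch.read(buf); // -1 means EOF, mirroring lastBytesRead() < 0 above
        buf.flip();           // make the bytes readable for the next stage
        return n;
    }
}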
Once a chunk of data has been read (possibly only part of a message), the pipeline processes it: head -> handler... -> tail. Let's take a concrete decoder that Netty ships as the example — it is just another channelRead() handling flow:
// io.netty.handler.codec.ByteToMessageDecoder#channelRead
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    if (msg instanceof ByteBuf) {
        CodecOutputList out = CodecOutputList.newInstance();
        try {
            ByteBuf data = (ByteBuf) msg;
            first = cumulation == null;
            // on first entry just take data; afterwards append to cumulation so bytes join up.
            // typically a decoder is created per connection, so accumulated state stays
            // per-connection and is effectively thread-safe
            if (first) {
                cumulation = data;
            } else {
                cumulation = cumulator.cumulate(ctx.alloc(), cumulation, data);
            }
            // call decode, e.g. turning bytes into a String
            callDecode(ctx, cumulation, out);
        } catch (DecoderException e) {
            throw e;
        } catch (Exception e) {
            throw new DecoderException(e);
        } finally {
            if (cumulation != null && !cumulation.isReadable()) {
                numReads = 0;
                // release the buffer
                cumulation.release();
                cumulation = null;
            } else if (++ numReads >= discardAfterReads) {
                // We did enough reads already try to discard some bytes so we not risk to see a OOME.
                // See https://github.com/netty/netty/issues/4275
                numReads = 0;
                discardSomeReadBytes();
            }

            int size = out.size();
            decodeWasNull = !out.insertSinceRecycled();
            // notify downstream that data arrived: fire channelRead once per decoded element
            fireChannelRead(ctx, out, size);
            out.recycle();
        }
    } else {
        ctx.fireChannelRead(msg);
    }
}

// io.netty.handler.codec.ByteToMessageDecoder#callDecode
/**
 * Called once data should be decoded from the given {@link ByteBuf}. This method will call
 * {@link #decode(ChannelHandlerContext, ByteBuf, List)} as long as decoding should take place.
 *
 * @param ctx the {@link ChannelHandlerContext} which this {@link ByteToMessageDecoder} belongs to
 * @param in the {@link ByteBuf} from which to read data
 * @param out the {@link List} to which decoded messages should be added
 */
protected void callDecode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
    try {
        while (in.isReadable()) {
            int outSize = out.size();
            // handle leftover output first
            if (outSize > 0) {
                // out already holds data: re-fire channelRead() so downstream can see it
                fireChannelRead(ctx, out, outSize);
                out.clear();

                // Check if this handler was removed before continuing with decoding.
                // If it was removed, it is not safe to continue to operate on the buffer.
                //
                // See:
                // - https://github.com/netty/netty/issues/4635
                if (ctx.isRemoved()) {
                    break;
                }
                outSize = 0;
            }

            int oldInputLength = in.readableBytes();
            // run the decode step over in, emitting results to out where possible
            decodeRemovalReentryProtection(ctx, in, out);

            // Check if this handler was removed before continuing the loop.
            // If it was removed, it is not safe to continue to operate on the buffer.
            //
            // See https://github.com/netty/netty/issues/1664
            if (ctx.isRemoved()) {
                break;
            }

            // nothing was decoded, or not enough data yet (e.g. a half packet):
            // out sizes before and after are equal
            if (outSize == out.size()) {
                if (oldInputLength == in.readableBytes()) {
                    break;
                } else {
                    continue;
                }
            }

            // after a successful decode, readableBytes() normally shrinks
            if (oldInputLength == in.readableBytes()) {
                throw new DecoderException(
                        StringUtil.simpleClassName(getClass()) +
                        ".decode() did not read anything but decoded a message.");
            }

            if (isSingleDecode()) {
                break;
            }
        }
    } catch (DecoderException e) {
        throw e;
    } catch (Exception cause) {
        throw new DecoderException(cause);
    }
}

// io.netty.handler.codec.ByteToMessageDecoder#decodeRemovalReentryProtection
/**
 * Decode the from one {@link ByteBuf} to an other. This method will be called till either the input
 * {@link ByteBuf} has nothing to read when return from this method or till nothing was read from the input
 * {@link ByteBuf}.
 *
 * @param ctx the {@link ChannelHandlerContext} which this {@link ByteToMessageDecoder} belongs to
 * @param in the {@link ByteBuf} from which to read data
 * @param out the {@link List} to which decoded messages should be added
 * @throws Exception is thrown if an error occurs
 */
final void decodeRemovalReentryProtection(ChannelHandlerContext ctx, ByteBuf in, List<Object> out)
        throws Exception {
    decodeState = STATE_CALLING_CHILD_DECODE;
    try {
        // convert the bytes into the desired type: this is where our custom logic runs
        decode(ctx, in, out);
    } finally {
        boolean removePending = decodeState == STATE_HANDLER_REMOVED_PENDING;
        decodeState = STATE_INIT;
        if (removePending) {
            handlerRemoved(ctx);
        }
    }
}

// for example, an implementation that turns bytes into a String
public class MessageDecoder extends ByteToMessageDecoder {
    // take the bytes out of the ByteBuf, convert them to an object, append it to the list
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> out) throws Exception {
        buffer.markReaderIndex();
        byte[] data = new byte[buffer.readableBytes()];
        buffer.readBytes(data);
        out.add(new String(data, "UTF-8"));
    }
}

// fire the downstream pipeline handlers for the decoded data
// io.netty.handler.codec.ByteToMessageDecoder#fireChannelRead
/**
 * Get {@code numElements} out of the {@link CodecOutputList} and forward these through the pipeline.
 */
static void fireChannelRead(ChannelHandlerContext ctx, CodecOutputList msgs, int numElements) {
    for (int i = 0; i < numElements; i ++) {
        ctx.fireChannelRead(msgs.getUnsafe(i));
    }
}
To summarize the decoding flow:
1. Receive the byteBuf read from the socket;
2. Check whether enough bytes have accumulated to decode; on success, append the decoded message to out;
3. Pass the contents of out to the downstream pipeline for business handling;
4. Release the consumed buffer bytes and get ready for the next read.
For short-lived connections a fresh encoder and decoder are created each time, but a long-lived connection reuses its handlers, so message boundaries must be handled carefully — a custom protocol needs to be rigorous enough to avoid misreads. One common framing approach is sketched below.
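A minimal sketch of length-prefix framing for the boundary problem just mentioned, assuming a 4-byte big-endian length header (Netty also ships LengthFieldBasedFrameDecoder for the general case). It only decodes once the full body has accumulated — the half-packet handling ByteToMessageDecoder makes possible:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class LengthPrefixedDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 4) {
            return; // not even the length header yet: wait for more bytes
        }
        in.markReaderIndex();
        int length = in.readInt();
        if (in.readableBytes() < length) {
            in.resetReaderIndex(); // half packet: rewind and wait for the rest
            return;
        }
        byte[] body = new byte[length];
        in.readBytes(body);
        out.add(new String(body, StandardCharsets.UTF_8)); // one complete message
    }
}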
Writing is the process of sending data to the peer. There are generally two phases, write and flush: write only places data into the application's output buffer, to be flushed to the peer at a suitable moment, while writeAndFlush pushes the data out immediately. Here is the implementation in AbstractChannelHandlerContext:
// io.netty.channel.AbstractChannelHandlerContext#writeAndFlush
@Override
public ChannelFuture writeAndFlush(Object msg) {
    return writeAndFlush(msg, newPromise());
}

// io.netty.channel.AbstractChannelHandlerContext#newPromise
@Override
public ChannelPromise newPromise() {
    // the channel comes from the pipeline; the executor is the channel's bound I/O thread
    return new DefaultChannelPromise(channel(), executor());
}

// io.netty.channel.AbstractChannelHandlerContext#writeAndFlush
@Override
public ChannelFuture writeAndFlush(Object msg, ChannelPromise promise) {
    if (msg == null) {
        throw new NullPointerException("msg");
    }

    // validate the channel and promise
    if (isNotValidPromise(promise, true)) {
        ReferenceCountUtil.release(msg);
        // cancelled
        return promise;
    }

    // write the data, with flush = true
    write(msg, true, promise);

    return promise;
}

private void write(Object msg, boolean flush, ChannelPromise promise) {
    // write is an outbound event: walk from the current node toward head, looking for outbound handlers
    AbstractChannelHandlerContext next = findContextOutbound();
    final Object m = pipeline.touch(msg, next);
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        if (flush) {
            // let the next node handle it
            next.invokeWriteAndFlush(m, promise);
        } else {
            next.invokeWrite(m, promise);
        }
    } else {
        AbstractWriteTask task;
        if (flush) {
            task = WriteAndFlushTask.newInstance(next, m, promise);
        }  else {
            task = WriteTask.newInstance(next, m, promise);
        }
        safeExecute(executor, task, promise, m);
    }
}

// io.netty.channel.AbstractChannelHandlerContext#invokeWriteAndFlush
private void invokeWriteAndFlush(Object msg, ChannelPromise promise) {
    if (invokeHandler()) {
        // step1. the write event puts data into the output buffer
        invokeWrite0(msg, promise);
        // step2. the flush event pushes buffered data to the peer
        invokeFlush0();
    } else {
        writeAndFlush(msg, promise);
    }
}
write means exactly what it says: write data out. So how is that implemented? (We'll stay at the application layer and leave the underlying TCP protocol alone.)
In practice it is just the write event propagating outbound through the pipeline, with the head node doing the final work.
private void invokeWrite0(Object msg, ChannelPromise promise) {
    try {
        // propagate the write
        ((ChannelOutboundHandler) handler()).write(this, msg, promise);
    } catch (Throwable t) {
        notifyOutboundHandlerException(t, promise);
    }
}

// handled here by the encoder
// io.netty.handler.codec.MessageToByteEncoder#write
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
    ByteBuf buf = null;
    try {
        if (acceptOutboundMessage(msg)) {
            @SuppressWarnings("unchecked")
            I cast = (I) msg;
            // allocate a byteBuf for the output; as with reads, a DirectByteBuffer may be used
            buf = allocateBuffer(ctx, cast, preferDirect);
            try {
                // call the user-implemented encode(), writing data into buf
                encode(ctx, cast, buf);
            } finally {
                ReferenceCountUtil.release(cast);
            }

            if (buf.isReadable()) {
                // data was written into buf: propagate the write event onward,
                // until head completes it
                ctx.write(buf, promise);
            } else {
                buf.release();
                ctx.write(Unpooled.EMPTY_BUFFER, promise);
            }
            buf = null;
        } else {
            ctx.write(msg, promise);
        }
    } catch (EncoderException e) {
        throw e;
    } catch (Throwable e) {
        throw new EncoderException(e);
    } finally {
        if (buf != null) {
            buf.release();
        }
    }
}

@Override
public ByteBuf ioBuffer() {
    if (PlatformDependent.hasUnsafe()) {
        return directBuffer(DEFAULT_INITIAL_CAPACITY);
    }
    return heapBuffer(DEFAULT_INITIAL_CAPACITY);
}

// the head node handles the concrete write details
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
    unsafe.write(msg, promise);
}

// io.netty.channel.AbstractChannel.AbstractUnsafe#write
@Override
public final void write(Object msg, ChannelPromise promise) {
    assertEventLoop();

    ChannelOutboundBuffer outboundBuffer = this.outboundBuffer;
    if (outboundBuffer == null) {
        // If the outboundBuffer is null we know the channel was closed and so
        // need to fail the future right away. If it is not null the handling of the rest
        // will be done in flush0()
        // See https://github.com/netty/netty/issues/2362
        safeSetFailure(promise, WRITE_CLOSED_CHANNEL_EXCEPTION);
        // release message now to prevent resource-leak
        ReferenceCountUtil.release(msg);
        return;
    }

    int size;
    try {
        // convert to a DirectByteBuffer where needed
        msg = filterOutboundMessage(msg);
        size = pipeline.estimatorHandle().size(msg);
        if (size < 0) {
            size = 0;
        }
    } catch (Throwable t) {
        safeSetFailure(promise, t);
        ReferenceCountUtil.release(msg);
        return;
    }

    // append the data to the outboundBuffer, i.e. the output buffer
    outboundBuffer.addMessage(msg, size, promise);
}

// io.netty.channel.nio.AbstractNioByteChannel#filterOutboundMessage
@Override
protected final Object filterOutboundMessage(Object msg) {
    if (msg instanceof ByteBuf) {
        ByteBuf buf = (ByteBuf) msg;
        if (buf.isDirect()) {
            return msg;
        }

        return newDirectBuffer(buf);
    }

    if (msg instanceof FileRegion) {
        return msg;
    }

    throw new UnsupportedOperationException(
            "unsupported message type: " + StringUtil.simpleClassName(msg) + EXPECTED_TYPES);
}

// io.netty.channel.ChannelOutboundBuffer#addMessage
/**
 * Add given message to this {@link ChannelOutboundBuffer}. The given {@link ChannelPromise} will be notified once
 * the message was written.
 */
public void addMessage(Object msg, int size, ChannelPromise promise) {
    Entry entry = Entry.newInstance(msg, size, total(msg), promise);
    if (tailEntry == null) {
        flushedEntry = null;
    } else {
        Entry tail = tailEntry;
        tail.next = entry;
    }
    tailEntry = entry;
    if (unflushedEntry == null) {
        unflushedEntry = entry;
    }

    // increment pending bytes after adding message to the unflushed arrays.
    // See https://github.com/netty/netty/issues/1619
    incrementPendingOutboundBytes(entry.pendingSize, false);
}

private void incrementPendingOutboundBytes(long size, boolean invokeLater) {
    if (size == 0) {
        return;
    }

    long newWriteBufferSize = TOTAL_PENDING_SIZE_UPDATER.addAndGet(this, size);
    if (newWriteBufferSize > channel.config().getWriteBufferHighWaterMark()) {
        // past the high-water mark the channel is flagged unwritable: a flush is needed
        setUnwritable(invokeLater);
    }
}

private void setUnwritable(boolean invokeLater) {
    for (;;) {
        final int oldValue = unwritable;
        final int newValue = oldValue | 1;
        if (UNWRITABLE_UPDATER.compareAndSet(this, oldValue, newValue)) {
            if (oldValue == 0 && newValue != 0) {
                fireChannelWritabilityChanged(invokeLater);
            }
            break;
        }
    }
}
So write only appends data to the outboundBuffer and should be fast. It still travels through the pipeline's outbound event chain layer by layer, though, which makes it easy to hook into along the way.
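A small sketch (handler name and payloads assumed) contrasting the two calls from inside a handler: write() only queues into the ChannelOutboundBuffer and nothing reaches the peer until flush(), so batching several writes before one flush is a common way to cut syscalls.

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.nio.charset.StandardCharsets;

public class BatchingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // queue two messages in the outbound buffer; the peer sees nothing yet
        ctx.write(Unpooled.copiedBuffer("part-1\n", StandardCharsets.UTF_8));
        ctx.write(Unpooled.copiedBuffer("part-2\n", StandardCharsets.UTF_8));
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush(); // one flush pushes everything queued above to the socket
    }
}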
The step above only placed data into the outboundBuffer; the peer sees nothing until a flush happens.
private void invokeWriteAndFlush(Object msg, ChannelPromise promise) {
    if (invokeHandler()) {
        invokeWrite0(msg, promise);
        invokeFlush0();
    } else {
        writeAndFlush(msg, promise);
    }
}

// io.netty.channel.AbstractChannelHandlerContext#invokeFlush0
private void invokeFlush0() {
    try {
        // handled by the MessageEncoder first
        ((ChannelOutboundHandler) handler()).flush(this);
    } catch (Throwable t) {
        notifyHandlerException(t);
    }
}

// io.netty.channel.ChannelOutboundHandlerAdapter#flush
/**
 * Calls {@link ChannelHandlerContext#flush()} to forward
 * to the next {@link ChannelOutboundHandler} in the {@link ChannelPipeline}.
 *
 * Sub-classes may override this method to change behavior.
 */
@Override
public void flush(ChannelHandlerContext ctx) throws Exception {
    ctx.flush();
}

// io.netty.channel.AbstractChannelHandlerContext#flush
@Override
public ChannelHandlerContext flush() {
    // walk the outbound handlers in turn, until head
    final AbstractChannelHandlerContext next = findContextOutbound();
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        next.invokeFlush();
    } else {
        Runnable task = next.invokeFlushTask;
        if (task == null) {
            next.invokeFlushTask = task = new Runnable() {
                @Override
                public void run() {
                    next.invokeFlush();
                }
            };
        }
        safeExecute(executor, task, channel().voidPromise(), null);
    }

    return this;
}

private void invokeFlush() {
    if (invokeHandler()) {
        // traverse the pipeline
        invokeFlush0();
    } else {
        flush();
    }
}

// the head node performs the final data flush
// io.netty.channel.DefaultChannelPipeline.HeadContext#flush
@Override
public void flush(ChannelHandlerContext ctx) throws Exception {
    // unsafe is NioSocketChannel$NioSocketChannelUnsafe
    unsafe.flush();
}

// io.netty.channel.AbstractChannel.AbstractUnsafe#flush
@Override
public final void flush() {
    assertEventLoop();

    ChannelOutboundBuffer outboundBuffer = this.outboundBuffer;
    if (outboundBuffer == null) {
        return;
    }

    outboundBuffer.addFlush();
    flush0();
}

// io.netty.channel.ChannelOutboundBuffer#addFlush
/**
 * Add a flush to this {@link ChannelOutboundBuffer}. This means all previous added messages are marked as flushed
 * and so you will be able to handle them.
 */
public void addFlush() {
    // There is no need to process all entries if there was already a flush before and no new messages
    // where added in the meantime.
    //
    // See https://github.com/netty/netty/issues/2577
    // unflushedEntry holds the data about to be flushed
    Entry entry = unflushedEntry;
    if (entry != null) {
        if (flushedEntry == null) {
            // there is no flushedEntry yet, so start with the entry
            flushedEntry = entry;
        }
        do {
            flushed ++;
            if (!entry.promise.setUncancellable()) {
                // Was cancelled so make sure we free up memory and notify about the freed bytes
                int pending = entry.cancel();
                decrementPendingOutboundBytes(pending, false, true);
            }
            entry = entry.next;
        } while (entry != null);

        // All flushed so reset unflushedEntry
        unflushedEntry = null;
    }
}

// io.netty.channel.nio.AbstractNioChannel.AbstractNioUnsafe#flush0
@Override
protected final void flush0() {
    // Flush immediately only when there's no pending flush.
    // If there's a pending flush operation, event loop will call forceFlush() later,
    // and thus there's no need to call it now.
    // on first entry this attempts an immediate socket write, or registers an OP_WRITE
    // event to trigger the write later
    if (!isFlushPending()) {
        super.flush0();
    }
}

private boolean isFlushPending() {
    SelectionKey selectionKey = selectionKey();
    return selectionKey.isValid() && (selectionKey.interestOps() & SelectionKey.OP_WRITE) != 0;
}

// io.netty.channel.AbstractChannel.AbstractUnsafe#flush0
@SuppressWarnings("deprecation")
protected void flush0() {
    if (inFlush0) {
        // Avoid re-entrance
        return;
    }

    final ChannelOutboundBuffer outboundBuffer = this.outboundBuffer;
    if (outboundBuffer == null || outboundBuffer.isEmpty()) {
        return;
    }

    inFlush0 = true;

    // Mark all pending write requests as failure if the channel is inactive.
    if (!isActive()) {
        try {
            if (isOpen()) {
                outboundBuffer.failFlushed(FLUSH0_NOT_YET_CONNECTED_EXCEPTION, true);
            } else {
                // Do not trigger channelWritabilityChanged because the channel is closed already.
                outboundBuffer.failFlushed(FLUSH0_CLOSED_CHANNEL_EXCEPTION, false);
            }
        } finally {
            inFlush0 = false;
        }
        return;
    }

    try {
        doWrite(outboundBuffer);
    } catch (Throwable t) {
        if (t instanceof IOException && config().isAutoClose()) {
            /**
             * Just call {@link #close(ChannelPromise, Throwable, boolean)} here which will take care of
             * failing all flushed messages and also ensure the actual close of the underlying transport
             * will happen before the promises are notified.
             *
             * This is needed as otherwise {@link #isActive()} , {@link #isOpen()} and {@link #isWritable()}
             * may still return {@code true} even if the channel should be closed as result of the exception.
             */
            close(voidPromise(), t, FLUSH0_CLOSED_CHANNEL_EXCEPTION, false);
        } else {
            try {
                shutdownOutput(voidPromise(), t);
            } catch (Throwable t2) {
                close(voidPromise(), t2, FLUSH0_CLOSED_CHANNEL_EXCEPTION, false);
            }
        }
    } finally {
        inFlush0 = false;
    }
}

// io.netty.channel.socket.nio.NioSocketChannel#doWrite
@Override
protected void doWrite(ChannelOutboundBuffer in) throws Exception {
    SocketChannel ch = javaChannel();
    int writeSpinCount = config().getWriteSpinCount();
    do {
        if (in.isEmpty()) {
            // All written so clear OP_WRITE
            clearOpWrite();
            // Directly return here so incompleteWrite(...) is not called.
            return;
        }

        // Ensure the pending writes are made of ByteBufs only.
        int maxBytesPerGatheringWrite = ((NioSocketChannelConfig) config).getMaxBytesPerGatheringWrite();
        ByteBuffer[] nioBuffers = in.nioBuffers(1024, maxBytesPerGatheringWrite);
        int nioBufferCnt = in.nioBufferCount();

        // Always use nioBuffers() to workaround data-corruption.
        // See https://github.com/netty/netty/issues/2761
        switch (nioBufferCnt) {
            case 0:
                // We have something else beside ByteBuffers to write so fallback to normal writes.
                writeSpinCount -= doWrite0(in);
                break;
            case 1: {
                // Only one ByteBuf so use non-gathering write
                // Zero length buffers are not added to nioBuffers by ChannelOutboundBuffer, so there is no need
                // to check if the total size of all the buffers is non-zero.
                ByteBuffer buffer = nioBuffers[0];
                int attemptedBytes = buffer.remaining();
                // write to the socket; the number of bytes written tells us whether we are done
                final int localWrittenBytes = ch.write(buffer);
                if (localWrittenBytes <= 0) {
                    incompleteWrite(true);
                    return;
                }
                adjustMaxBytesPerGatheringWrite(attemptedBytes, localWrittenBytes, maxBytesPerGatheringWrite);
                in.removeBytes(localWrittenBytes);
                // one write attempt used up; exit once the spin count is exhausted
                --writeSpinCount;
                break;
            }
            default: {
                // Zero length buffers are not added to nioBuffers by ChannelOutboundBuffer, so there is no need
                // to check if the total size of all the buffers is non-zero.
                // We limit the max amount to int above so cast is safe
                long attemptedBytes = in.nioBufferSize();
                final long localWrittenBytes = ch.write(nioBuffers, 0, nioBufferCnt);
                if (localWrittenBytes <= 0) {
                    incompleteWrite(true);
                    return;
                }
                // Casting to int is safe because we limit the total amount of data in the nioBuffers to int above.
                adjustMaxBytesPerGatheringWrite((int) attemptedBytes, (int) localWrittenBytes,
                        maxBytesPerGatheringWrite);
                in.removeBytes(localWrittenBytes);
                --writeSpinCount;
                break;
            }
        }
    } while (writeSpinCount > 0);

    // data not fully written: register an OP_WRITE event
    incompleteWrite(writeSpinCount < 0);
}

protected final void clearOpWrite() {
    final SelectionKey key = selectionKey();
    // Check first if the key is still valid as it may be canceled as part of the deregistration
    // from the EventLoop
    // See https://github.com/netty/netty/issues/2104
    if (!key.isValid()) {
        return;
    }
    final int interestOps = key.interestOps();
    // stop listening for write events
    if ((interestOps & SelectionKey.OP_WRITE) != 0) {
        key.interestOps(interestOps & ~SelectionKey.OP_WRITE);
    }
}

// fetching the nioBuffers ----------------------------------------------------
/**
 * Returns an array of direct NIO buffers if the currently pending messages are made of {@link ByteBuf} only.
 * {@link #nioBufferCount()} and {@link #nioBufferSize()} will return the number of NIO buffers in the returned
 * array and the total number of readable bytes of the NIO buffers respectively.
 * <p>
 * Note that the returned array is reused and thus should not escape
 * {@link AbstractChannel#doWrite(ChannelOutboundBuffer)}.
 * Refer to {@link NioSocketChannel#doWrite(ChannelOutboundBuffer)} for an example.
 * </p>
 * @param maxCount The maximum amount of buffers that will be added to the return value.
 * @param maxBytes A hint toward the maximum number of bytes to include as part of the return value. Note that this
 *                 value maybe exceeded because we make a best effort to include at least 1 {@link ByteBuffer}
 *                 in the return value to ensure write progress is made.
 */
public ByteBuffer[] nioBuffers(int maxCount, long maxBytes) {
    assert maxCount > 0;
    assert maxBytes > 0;
    long nioBufferSize = 0;
    int nioBufferCount = 0;
    final InternalThreadLocalMap threadLocalMap = InternalThreadLocalMap.get();
    ByteBuffer[] nioBuffers = NIO_BUFFERS.get(threadLocalMap);
    Entry entry = flushedEntry;
    while (isFlushedEntry(entry) && entry.msg instanceof ByteBuf) {
        if (!entry.cancelled) {
            ByteBuf buf = (ByteBuf) entry.msg;
            final int readerIndex = buf.readerIndex();
            final int readableBytes = buf.writerIndex() - readerIndex;

            if (readableBytes > 0) {
                if (maxBytes - readableBytes < nioBufferSize && nioBufferCount != 0) {
                    // If the nioBufferSize + readableBytes will overflow maxBytes, and there is at least one entry
                    // we stop populate the ByteBuffer array. This is done for 2 reasons:
                    // 1. bsd/osx don't allow to write more bytes then Integer.MAX_VALUE with one writev(...) call
                    // and so will return 'EINVAL', which will raise an IOException. On Linux it may work depending
                    // on the architecture and kernel but to be safe we also enforce the limit here.
                    // 2. There is no sense in putting more data in the array than is likely to be accepted by the
                    // OS.
                    //
                    // See also:
                    // - https://www.freebsd.org/cgi/man.cgi?query=write&sektion=2
                    // - http://linux.die.net/man/2/writev
                    break;
                }
                nioBufferSize += readableBytes;
                int count = entry.count;
                if (count == -1) {
                    //noinspection ConstantValueVariableUse
                    entry.count = count = buf.nioBufferCount();
                }
                int neededSpace = min(maxCount, nioBufferCount + count);
                if (neededSpace > nioBuffers.length) {
                    nioBuffers = expandNioBufferArray(nioBuffers, neededSpace, nioBufferCount);
                    NIO_BUFFERS.set(threadLocalMap, nioBuffers);
                }
                if (count == 1) {
                    ByteBuffer nioBuf = entry.buf;
                    if (nioBuf == null) {
                        // cache ByteBuffer as it may need to create a new ByteBuffer instance if its a
                        // derived buffer
                        entry.buf = nioBuf = buf.internalNioBuffer(readerIndex, readableBytes);
                    }
                    nioBuffers[nioBufferCount++] = nioBuf;
                } else {
                    ByteBuffer[] nioBufs = entry.bufs;
                    if (nioBufs == null) {
                        // cached ByteBuffers as they may be expensive to create in terms
                        // of Object allocation
                        entry.bufs = nioBufs = buf.nioBuffers();
                    }
                    for (int i = 0; i < nioBufs.length && nioBufferCount < maxCount; ++i) {
                        ByteBuffer nioBuf = nioBufs[i];
                        if (nioBuf == null) {
                            break;
                        } else if (!nioBuf.hasRemaining()) {
                            continue;
                        }
                        nioBuffers[nioBufferCount++] = nioBuf;
                    }
                }
                if (nioBufferCount == maxCount) {
                    break;
                }
            }
        }
        entry = entry.next;
    }
    this.nioBufferCount = nioBufferCount;
    this.nioBufferSize = nioBufferSize;

    return nioBuffers;
}

// handling an incomplete write: register OP_WRITE so a later eventloop pass finishes it
// io.netty.channel.nio.AbstractNioByteChannel#incompleteWrite
protected final void incompleteWrite(boolean setOpWrite) {
    // Did not write completely.
    if (setOpWrite) {
        setOpWrite();
    } else {
        // It is possible that we have set the write OP, woken up by NIO because the socket is writable, and then
        // use our write quantum. In this case we no longer want to set the write OP because the socket is still
        // writable (as far as we know). We will find out next time we attempt to write if the socket is writable
        // and set the write OP if necessary.
        clearOpWrite();

        // Schedule flush again later so other tasks can be picked up in the meantime
        eventLoop().execute(flushTask);
    }
}

// io.netty.channel.nio.AbstractNioByteChannel#setOpWrite
protected final void setOpWrite() {
    final SelectionKey key = selectionKey();
    // Check first if the key is still valid as it may be canceled as part of the deregistration
    // from the EventLoop
    // See https://github.com/netty/netty/issues/2104
    if (!key.isValid()) {
        return;
    }
    final int interestOps = key.interestOps();
    // if the data was not fully written, register write-event interest so the eventloop handles it
    if ((interestOps & SelectionKey.OP_WRITE) == 0) {
        key.interestOps(interestOps | SelectionKey.OP_WRITE);
    }
}
By now you should have a deeper understanding of how an I/O request is processed — why not trace it through yourself?