Apache Pulsar: the logic of TopicLookup request handling
The actual core logic is these two lines of code:

```java
LookupOptions options = LookupOptions.builder()
        .authoritative(authoritative)
        .advertisedListenerName(advertisedListenerName)
        .loadTopicsInBundle(true) // this flag is true here
        .build();
pulsarService.getNamespaceService().getBrokerServiceUrlAsync(topicName, options)
```

The options passed here set loadTopicsInBundle to true. Let's see whether there is any topic-loading logic along the lookup handling path.
In findBrokerServiceUrl, two calls deserve closer attention: ownershipCache.getOwnerAsync and searchForCandidateBroker. Neither has been covered in detail yet; let's start with ownershipCache.
```java
private CompletableFuture<Optional<LookupResult>> findBrokerServiceUrl(
        NamespaceBundle bundle, LookupOptions options) {
    ....
    return targetMap.computeIfAbsent(bundle, (k) -> {
        ...
        ownershipCache.getOwnerAsync(bundle)
                .thenAccept(nsData -> {
                    // nsData : Optional<NamespaceEphemeralData>
                    if (!nsData.isPresent()) {
                        ...
                        // Nobody owns this bundle yet; try to find an owner for it
                        pulsar.getExecutor().execute(() -> {
                            searchForCandidateBroker(bundle, future, options);
                        });
                        ...
                    }
                    ...
                });
    }
```
The javadoc tells us this class's main responsibilities:

- Caches the ownership information about service units stored in ZK
- Provides read/write access to that ZK data
- Can be used to look up owner information
- Can be used to acquire ownership of a service unit
getOwnerAsync first checks whether the ZK cache already has an entry; if not, it tries to read the ZK node directly:

- If the node has data, somebody already holds ownership of this bundle.
- If that owner is the current machine, listeners are notified that the bundle is loaded.
- If the node has no data, nobody owns this bundle yet.
```java
// org.apache.pulsar.broker.namespace.OwnerShipCache
public CompletableFuture<Optional<NamespaceEphemeralData>> getOwnerAsync(NamespaceBundle suName) {
    // The path here is /namespace/{namespace}/0x{lowerEndpoint}_0x{upperEndpoint}
    String path = ServiceUnitZkUtils.path(suName);

    // ownedBundlesCache is an AsyncLoadingCache.
    // This does not try to load the cache entry, because it calls getIfPresent.
    CompletableFuture<OwnedBundle> ownedBundleFuture = ownedBundlesCache.getIfPresent(path);
    // If there was already an entry, the current broker is the owner
    // (that logic lives in the cache loader, covered later)
    if (ownedBundleFuture != null) {
        // Either we're the owners or we're trying to become the owner.
        return ownedBundleFuture.thenApply(serviceUnit -> {
            // We are the owner of the service unit
            return Optional.of(serviceUnit.isActive() ? selfOwnerInfo : selfOwnerInfoDisabled);
        });
    }

    // Nothing in the cache; find out who the current owner is.
    // If we're not the owner, we need to check if anybody else is
    return resolveOwnership(path)
            .thenApply(optional -> optional.map(Map.Entry::getKey));
}

private CompletableFuture<Optional<Map.Entry<NamespaceEphemeralData, Stat>>> resolveOwnership(String path) {
    return ownershipReadOnlyCache.getWithStatAsync(path)
            // Read the content under this bundle's path from ZK
            .thenApply(optionalOwnerDataWithStat -> {
                // If there is data at this path, somebody has already acquired
                // ownership of this bundle
                if (optionalOwnerDataWithStat.isPresent()) {
                    Map.Entry<NamespaceEphemeralData, Stat> ownerDataWithStat = optionalOwnerDataWithStat.get();
                    Stat stat = ownerDataWithStat.getValue();
                    // If the owner of this ephemeral znode is the current broker
                    if (stat.getEphemeralOwner() == localZkCache.getZooKeeper().getSessionId()) {
                        LOG.info("Successfully reestablish ownership of {}", path);
                        // Update the cache
                        OwnedBundle ownedBundle =
                                new OwnedBundle(ServiceUnitZkUtils.suBundleFromPath(path, bundleFactory));
                        if (selfOwnerInfo.getNativeUrl().equals(ownerDataWithStat.getKey().getNativeUrl())) {
                            ownedBundlesCache.put(path, CompletableFuture.completedFuture(ownedBundle));
                        }
                        ownershipReadOnlyCache.invalidate(path);
                        // Notify callbacks (not part of the main logic)
                        namespaceService.onNamespaceBundleOwned(ownedBundle.getNamespaceBundle());
                    }
                }
                // This returns an Optional; if the znode does not exist it is empty,
                // meaning nobody owns this bundle at the moment.
                // It may also carry data, in which case the owning broker may be
                // this machine or another one.
                return optionalOwnerDataWithStat;
            });
}
```
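The ownership node path mentioned in the comment can be illustrated with a small helper. This is only a sketch, not Pulsar's actual ServiceUnitZkUtils; it just renders the `/namespace/{namespace}/0x{lowerEndpoint}_0x{upperEndpoint}` pattern, assuming the endpoints are printed as zero-padded 8-digit hex.

```java
// Illustrative sketch of the ZK path layout for a bundle's ownership node.
// Not the real ServiceUnitZkUtils; the format follows the comment in getOwnerAsync.
public class BundlePathSketch {
    public static String ownershipPath(String namespace, long lower, long upper) {
        // Hash-range endpoints rendered as zero-padded 8-digit hex (an assumption)
        return String.format("/namespace/%s/0x%08x_0x%08x", namespace, lower, upper);
    }

    public static void main(String[] args) {
        // A namespace's hash range is split into bundles; each bundle gets its
        // own ephemeral ownership znode under /namespace.
        System.out.println(ownershipPath("public/default", 0x00000000L, 0x40000000L));
    }
}
```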
Now consider the case where nobody owns the bundle yet. searchForCandidateBroker decides which broker should become the bundle's owner, relying on the LeaderElectionService and the LoadManager. If the chosen broker is the local machine, it tries to acquire ownership of the bundle itself; otherwise the request is forwarded to the chosen broker, asking it to acquire ownership.
```java
private void searchForCandidateBroker(NamespaceBundle bundle,
        CompletableFuture<Optional<LookupResult>> lookupFuture, LookupOptions options) {
    ...
    // First, pick a likely candidate broker for this bundle
    String candidateBroker = null;
    ...
    boolean authoritativeRedirect = les.isLeader();

    try {
        // check if this is Heartbeat or SLAMonitor namespace
        ...
        if (candidateBroker == null) {
            if (options.isAuthoritative()) {
                // leader broker already assigned the current broker as owner
                candidateBroker = pulsar.getSafeWebServiceAddress();
            } else
                // If the load manager is not centralized (it is used for load balancing), or
                // this LeaderElectionService is the leader, or
                // the current leader broker is not active
                if (!this.loadManager.get().isCentralized()
                        || pulsar.getLeaderElectionService().isLeader()
                        // If leader is not active, fallback to pick the least loaded from current broker loadmanager
                        || !isBrokerActive(pulsar.getLeaderElectionService().getCurrentLeader().getServiceUrl())
                ) {
                    // Pick the least loaded broker from the load manager
                    Optional<String> availableBroker = getLeastLoadedFromLoadManager(bundle);
                    if (!availableBroker.isPresent()) {
                        lookupFuture.complete(Optional.empty());
                        return;
                    }
                    candidateBroker = availableBroker.get();
                    authoritativeRedirect = true;
                } else {
                    // forward to leader broker to make assignment
                    candidateBroker = pulsar.getLeaderElectionService().getCurrentLeader().getServiceUrl();
                }
        }
    } catch (Exception e) {
        ...
    }

    // At this point a candidate broker address has been chosen
    try {
        checkNotNull(candidateBroker);
        // If the candidate broker is the current machine
        if (candidateBroker.equals(pulsar.getSafeWebServiceAddress())) {
            ...
            // Use ownershipCache to try to acquire ownership of this bundle
            ownershipCache.tryAcquiringOwnership(bundle)
                    .thenAccept(ownerInfo -> {
                        ...
                        // The flag from the beginning of the article: whether to
                        // load all topics contained in the bundle
                        if (options.isLoadTopicsInBundle()) {
                            // Schedule the task to pre-load topics
                            pulsar.loadNamespaceTopics(bundle);
                        }
                        // find the target
                        // Reaching here means the current broker is now the owner of
                        // this bundle; return this machine's info to the requester
                        lookupFuture.complete(Optional.of(new LookupResult(ownerInfo)));
                        return;
                    }).exceptionally(exception -> {
                        ...
                    });
        } else {
            ...
            // Forward this lookup request to another broker.
            // Load managed decider some other broker should try to acquire ownership
            // Now setting the redirect url
            createLookupResult(candidateBroker, authoritativeRedirect, options.getAdvertisedListenerName())
                    .thenAccept(lookupResult -> lookupFuture.complete(Optional.of(lookupResult)))
                    .exceptionally(ex -> {
                        lookupFuture.completeExceptionally(ex);
                        return null;
                    });
        }
    } catch (Exception e) {
        ...
    }
}
```
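The branching above can be restated as a pure function, which makes the decision table easier to read. This is only an illustrative restatement; all names below are hypothetical, not Pulsar API.

```java
// Pure-function restatement of the candidate-selection branch in
// searchForCandidateBroker. Illustrative only; not Pulsar code.
public class CandidateChoiceSketch {
    public static String chooseCandidate(boolean authoritative,
                                         boolean centralizedLoadManager,
                                         boolean iAmLeader,
                                         boolean leaderActive,
                                         String selfUrl,
                                         String leaderUrl,
                                         String leastLoadedUrl) {
        if (authoritative) {
            // The leader has already assigned this broker as the owner
            return selfUrl;
        }
        if (!centralizedLoadManager || iAmLeader || !leaderActive) {
            // Pick the least loaded broker from the load manager
            return leastLoadedUrl;
        }
        // Otherwise forward the assignment decision to the leader broker
        return leaderUrl;
    }
}
```

In the common case (a centralized load manager, a healthy leader, and a non-authoritative request on a follower) the request is simply forwarded to the leader, which then makes the authoritative assignment.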
Here is the logic that tries to acquire ownership of the bundle. All it needs to do is record the current node's information in ZK (it also maintains the cache along the way).
```java
public CompletableFuture<NamespaceEphemeralData> tryAcquiringOwnership(NamespaceBundle bundle) throws Exception {
    String path = ServiceUnitZkUtils.path(bundle);
    CompletableFuture<NamespaceEphemeralData> future = new CompletableFuture<>();
    ...
    LOG.info("Trying to acquire ownership of {}", bundle);

    // This calls get(), which triggers the cache-loading logic.
    // Doing a get() on the ownedBundlesCache will trigger an async ZK write to acquire the lock over the
    // service unit
    ownedBundlesCache.get(path)
            .thenAccept(namespaceBundle -> {
                // Reaching here means ownership of the bundle was acquired; return directly
                LOG.info("Successfully acquired ownership of {}", path);
                namespaceService.onNamespaceBundleOwned(bundle);
                future.complete(selfOwnerInfo);
            }).exceptionally(exception -> {
                // Loading failed (somebody else may have become the owner).
                // Failed to acquire ownership
                if (exception instanceof CompletionException
                        && exception.getCause() instanceof KeeperException.NodeExistsException) {
                    // Find out who the current owner is
                    resolveOwnership(path)
                            .thenAccept(optionalOwnerDataWithStat -> {
                                // This carries the info of whoever acquired ownership earlier
                                if (optionalOwnerDataWithStat.isPresent()) {
                                    Map.Entry<NamespaceEphemeralData, Stat> ownerDataWithStat =
                                            optionalOwnerDataWithStat.get();
                                    NamespaceEphemeralData ownerData = ownerDataWithStat.getKey();
                                    Stat stat = ownerDataWithStat.getValue();
                                    if (stat.getEphemeralOwner() != localZkCache.getZooKeeper().getSessionId()) {
                                        LOG.info("Failed to acquire ownership of {} -- Already owned by broker {}",
                                                path, ownerData);
                                    }
                                    // Just return the existing owner
                                    future.complete(ownerData);
                                } else {
                                    ...
                                }
                            }).exceptionally(ex -> {
                                ....
                            });
                } else {
                    ...
                }
                return null;
            });
    return future;
}
```
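The getIfPresent/get split is the crux here: getOwnerAsync calls getIfPresent (a pure read) while tryAcquiringOwnership calls get, which runs the cache loader, and in Pulsar the loader is what writes the ephemeral znode. In other words, loading the cache entry *is* acquiring ownership. A minimal sketch of that semantic difference, using a plain map instead of Caffeine or ZK (all names hypothetical):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal sketch (not Caffeine, not Pulsar) of why getOwnerAsync uses
// getIfPresent while tryAcquiringOwnership uses get: only the latter runs
// the loader, and the loader is the acquisition side effect.
public class OwnershipCacheSketch {
    private final ConcurrentMap<String, CompletableFuture<String>> cache =
            new ConcurrentHashMap<>();

    // Stand-in for the Caffeine loader; in Pulsar this writes the ephemeral znode
    private CompletableFuture<String> acquire(String path) {
        return CompletableFuture.completedFuture("owned:" + path);
    }

    // Like getIfPresent: a pure read, never triggers acquisition
    public CompletableFuture<String> peek(String path) {
        return cache.get(path);
    }

    // Like get: loads (i.e. acquires) on a cache miss
    public CompletableFuture<String> getOrAcquire(String path) {
        return cache.computeIfAbsent(path, this::acquire);
    }
}
```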
The loader logic is straightforward: serialize this machine's connection info and write it under the bundle's path.
```java
private class OwnedServiceUnitCacheLoader implements AsyncCacheLoader<String, OwnedBundle> {
    @SuppressWarnings("deprecation")
    @Override
    public CompletableFuture<OwnedBundle> asyncLoad(String namespaceBundleZNode, Executor executor) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Acquiring zk lock on namespace {}", namespaceBundleZNode);
        }

        byte[] znodeContent;
        try {
            znodeContent = jsonMapper.writeValueAsBytes(selfOwnerInfo);
        } catch (JsonProcessingException e) {
            // Failed to serialize to JSON
            return FutureUtil.failedFuture(e);
        }

        CompletableFuture<OwnedBundle> future = new CompletableFuture<>();
        ZkUtils.asyncCreateFullPathOptimistic(localZkCache.getZooKeeper(), namespaceBundleZNode, znodeContent,
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL, (rc, path, ctx, name) -> {
                    if (rc == KeeperException.Code.OK.intValue()) {
                        if (LOG.isDebugEnabled()) {
                            LOG.debug("Successfully acquired zk lock on {}", namespaceBundleZNode);
                        }
                        ownershipReadOnlyCache.invalidate(namespaceBundleZNode);
                        future.complete(new OwnedBundle(
                                ServiceUnitZkUtils.suBundleFromPath(namespaceBundleZNode, bundleFactory)));
                    } else {
                        // Failed to acquire lock
                        future.completeExceptionally(KeeperException.create(rc));
                    }
                }, null);
        return future;
    }
}
```
At this point we can obtain ownership of the bundle. Now let's look at the topic pre-loading logic mentioned earlier.
PulsarService.loadNamespaceTopics
```java
public void loadNamespaceTopics(NamespaceBundle bundle) {
    executor.submit(() -> {
        NamespaceName nsName = bundle.getNamespaceObject();
        List<CompletableFuture<Topic>> persistentTopics = Lists.newArrayList();
        long topicLoadStart = System.nanoTime();

        for (String topic : getNamespaceService().getListOfPersistentTopics(nsName).join()) {
            try {
                TopicName topicName = TopicName.get(topic);
                if (bundle.includes(topicName)) {
                    // Creates a Topic object and stores it in the BrokerService.
                    // That part (which involves ManagedLedger initialization) is covered later.
                    CompletableFuture<Topic> future = brokerService.getOrCreateTopic(topic);
                    if (future != null) {
                        persistentTopics.add(future);
                    }
                }
            }
            ...
        }
        ...
        return null;
    });
}
```
NamespaceService.getListOfPersistentTopics
This one is simple: read all children of the ZK path /managed-ledgers/%s/persistent.
```java
public CompletableFuture<List<String>> getListOfPersistentTopics(NamespaceName namespaceName) {
    // For every topic there will be a managed ledger created.
    String path = String.format("/managed-ledgers/%s/persistent", namespaceName);
    if (LOG.isDebugEnabled()) {
        LOG.debug("Getting children from managed-ledgers now: {}", path);
    }

    return pulsar.getLocalZkCacheService().managedLedgerListCache().getAsync(path)
            .thenApply(znodes -> {
                List<String> topics = Lists.newArrayList();
                for (String znode : znodes) {
                    topics.add(String.format("persistent://%s/%s", namespaceName, Codec.decode(znode)));
                }
                topics.sort(null);
                return topics;
            });
}
```
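The znode-to-topic-name mapping can be sketched on its own. This assumes Codec.decode performs URL-style percent-decoding (the real Codec is Pulsar's; java.net.URLDecoder is used here as an approximation):

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the znode-name to topic-name mapping in getListOfPersistentTopics.
// URLDecoder stands in for Pulsar's Codec.decode (an assumption).
public class TopicListSketch {
    public static List<String> toTopicNames(String namespace, List<String> znodes) {
        List<String> topics = new ArrayList<>();
        for (String znode : znodes) {
            topics.add("persistent://" + namespace + "/"
                    + URLDecoder.decode(znode, StandardCharsets.UTF_8));
        }
        Collections.sort(topics); // the real code sorts with topics.sort(null)
        return topics;
    }
}
```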