
Types of Database Data Source Monitoring

Published: 2021-09-27 09:23:20  Source: 億速云  Author: 柒染  Category: Big Data

This article walks through the types of database data source monitoring. The content is concise and easy to follow, and hopefully the walkthrough below gives you something to take away.

1. Druid Monitoring

Problem:
Alibaba Druid ships with fairly complete database monitoring, but it has clear drawbacks: values such as the data source's connection counts are only visible on the monitoring page as instantaneous snapshots, the metrics are not persisted, and they cannot be integrated with the company's internal monitoring and alerting.
Solution:
Poll Druid's internal statistics API periodically and push the values to the monitoring system, as in the snippet below.

private class DruidStatsThread extends Thread {

    public DruidStatsThread(String name) {
        super(name);
        this.setDaemon(true);
    }

    @Override
    public void run() {
        long initialDelay = metricDruidProperties.getInitialDelay() * 1000;
        if (initialDelay > 0) {
            MwThreadUtil.sleep(initialDelay);
        }
        while (!this.isInterrupted()) {
            try {
                try {
                    Set<DruidDataSource> druidDataSources =
                            DruidDataSourceStatManager.getDruidDataSourceInstances();
                    Optional.ofNullable(druidDataSources).ifPresent(val -> val.forEach(druidDataSource -> {
                        DruidDataSourceStatValue statValue = druidDataSource.getStatValueAndReset();
                        long maxWaitMillis = druidDataSource.getMaxWait();            // maximum time to wait for a connection
                        long waitThreadCount = statValue.getWaitThreadCount();        // threads currently waiting for a connection
                        long notEmptyWaitMillis = statValue.getNotEmptyWaitMillis();  // cumulative time spent waiting for connections
                        long notEmptyWaitCount = statValue.getNotEmptyWaitCount();    // cumulative number of waits for a connection

                        int maxActive = druidDataSource.getMaxActive();               // maximum number of active connections
                        int poolingCount = statValue.getPoolingCount();               // connections currently sitting in the pool
                        int poolingPeak = statValue.getPoolingPeak();                 // peak pool size
                        int activeCount = statValue.getActiveCount();                 // connections currently in use
                        int activePeak = statValue.getActivePeak();                   // peak number of connections in use

                        if (Objects.nonNull(statsDClient)) {
                            URI jdbcUri = parseJdbcUrl(druidDataSource.getUrl());
                            Optional.ofNullable(jdbcUri).ifPresent(val2 -> {
                                String host = StringUtils.replaceChars(val2.getHost(), '.', '_');
                                String prefix = METRIC_DRUID_PREFIX + host + '.' + val2.getPort() + '.';
                                statsDClient.recordExecutionTime(prefix + "maxWaitMillis", maxWaitMillis);
                                statsDClient.recordExecutionTime(prefix + "waitThreadCount", waitThreadCount);
                                statsDClient.recordExecutionTime(prefix + "notEmptyWaitMillis", notEmptyWaitMillis);
                                statsDClient.recordExecutionTime(prefix + "notEmptyWaitCount", notEmptyWaitCount);
                                statsDClient.recordExecutionTime(prefix + "maxActive", maxActive);
                                statsDClient.recordExecutionTime(prefix + "poolingCount", poolingCount);
                                statsDClient.recordExecutionTime(prefix + "poolingPeak", poolingPeak);
                                statsDClient.recordExecutionTime(prefix + "activeCount", activeCount);
                                statsDClient.recordExecutionTime(prefix + "activePeak", activePeak);
                            });
                        } else {
                            druidDataSource.logStats();
                        }
                    }));
                } catch (Exception e) {
                    logger.error("druid stats exception", e);
                }
                TimeUnit.SECONDS.sleep(metricDruidProperties.getStatsInterval());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                logger.info("metric druid interrupt exit...");
            } catch (Exception e) {
                logger.error("metric druid exception...", e);
            }
        }
    }
}

private URI parseJdbcUrl(String url) {
    if (StringUtils.isBlank(url) || !StringUtils.startsWith(url, "jdbc:")) {
        return null;
    }
    String cleanURI = url.substring(5);
    return URI.create(cleanURI);
}
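
To make the metric naming concrete, here is a small self-contained check of how the parseJdbcUrl mapping turns a JDBC URL into a metric prefix. The "druid." literal stands in for the METRIC_DRUID_PREFIX constant, which is not shown in the excerpt, and the URL is only an example:

import java.net.URI;

// Standalone demo of the URL-to-metric-prefix mapping used above.
public class JdbcUrlPrefixDemo {
    public static void main(String[] args) {
        String jdbcUrl = "jdbc:mysql://db1.example.com:3306/app?useSSL=false";
        // Same logic as parseJdbcUrl(): strip the "jdbc:" scheme and let java.net.URI parse the rest.
        URI uri = URI.create(jdbcUrl.substring(5));
        String host = uri.getHost().replace('.', '_');                // db1_example_com
        String prefix = "druid." + host + '.' + uri.getPort() + '.';
        System.out.println(prefix + "activeCount");                   // druid.db1_example_com.3306.activeCount
    }
}
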
2. Hikari Monitoring

Problem:
For Hikari data sources there is no unified monitoring handling out of the box, but HikariCP exposes its pool state through JMX MXBeans. The same idea applies: poll those beans periodically and persist the values in the monitoring service.
Solution:

private class HikariStatsThread extends Thread {

    public HikariStatsThread(String name) {
        super(name);
        this.setDaemon(true);
    }

    @Override
    public void run() {
        long initialDelay = metricHikariProperties.getInitialDelay() * 1000;
        if (initialDelay > 0) {
            MwThreadUtil.sleep(initialDelay);
        }
        while (!this.isInterrupted()) {
            try {
                Optional.ofNullable(hikariDataSources).ifPresent(val -> val.forEach(hikariDataSource -> {
                    URI jdbcUri = parseJdbcUrl(hikariDataSource.getJdbcUrl());
                    Optional.ofNullable(jdbcUri).ifPresent(val2 -> {
                        String host = StringUtils.replaceChars(val2.getHost(), '.', '_');
                        String prefix = METRIC_HIKARI_PREFIX + host + '.' + val2.getPort() + '.';

                        PoolStatBean poolStatBean = PoolStatBean.builder().build();
                        HikariPoolMXBean hikariPoolMXBean = hikariDataSource.getHikariPoolMXBean();
                        Optional.ofNullable(hikariPoolMXBean).ifPresent(val3 -> {
                            int activeConnections = val3.getActiveConnections();
                            int idleConnections = val3.getIdleConnections();
                            int totalConnections = val3.getTotalConnections();
                            int threadsAwaitingConnection = val3.getThreadsAwaitingConnection();
                            poolStatBean.setActiveConnections(activeConnections);
                            poolStatBean.setIdleConnections(idleConnections);
                            poolStatBean.setTotalConnections(totalConnections);
                            poolStatBean.setThreadsAwaitingConnection(threadsAwaitingConnection);
                        });
                        HikariConfigMXBean hikariConfigMXBean = hikariDataSource.getHikariConfigMXBean();
                        Optional.ofNullable(hikariConfigMXBean).ifPresent(val3 -> {
                            int maximumPoolSize = val3.getMaximumPoolSize();
                            int minimumIdle = val3.getMinimumIdle();
                            poolStatBean.setMaximumPoolSize(maximumPoolSize);
                            poolStatBean.setMinimumIdle(minimumIdle);
                        });
                        statsPool(prefix, poolStatBean);
                    });
                }));
                TimeUnit.SECONDS.sleep(metricHikariProperties.getStatsInterval());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                logger.info("metric hikari interrupt exit...");
            } catch (Exception e) {
                logger.error("metric hikari exception...", e);
            }
        }
    }
}

private void statsPool(String prefix, PoolStatBean poolStatBean) {
    if (Objects.nonNull(statsDClient)) {
        statsDClient.recordExecutionTime(prefix + "activeConnections", poolStatBean.getActiveConnections());
        statsDClient.recordExecutionTime(prefix + "idleConnections", poolStatBean.getIdleConnections());
        statsDClient.recordExecutionTime(prefix + "totalConnections", poolStatBean.getTotalConnections());
        statsDClient.recordExecutionTime(prefix + "threadsAwaitingConnection",
                poolStatBean.getThreadsAwaitingConnection());
        statsDClient.recordExecutionTime(prefix + "maximumPoolSize", poolStatBean.getMaximumPoolSize());
        statsDClient.recordExecutionTime(prefix + "minimumIdle", poolStatBean.getMinimumIdle());
        return;
    }
    StringBuilder sBuilder = new StringBuilder(16);
    sBuilder.append(prefix + "activeConnections => [" + poolStatBean.getActiveConnections() + "],");
    sBuilder.append(prefix + "idleConnections => [" + poolStatBean.getIdleConnections() + "],");
    sBuilder.append(prefix + "totalConnections => [" + poolStatBean.getTotalConnections() + "],");
    sBuilder.append(prefix + "threadsAwaitingConnection => [" + poolStatBean.getThreadsAwaitingConnection() + "],");
    sBuilder.append(prefix + "maximumPoolSize => [" + poolStatBean.getMaximumPoolSize() + "],");
    sBuilder.append(prefix + "minimumIdle => [" + poolStatBean.getMinimumIdle() + "]");
    logger.info(sBuilder.toString());
}

private URI parseJdbcUrl(String url) {
    if (StringUtils.isBlank(url) || !StringUtils.startsWith(url, "jdbc:")) {
        return null;
    }
    String cleanURI = url.substring(5);
    return URI.create(cleanURI);
}

@Data
@Builder
private static class PoolStatBean {
    private int activeConnections;
    private int idleConnections;
    private int totalConnections;
    private int threadsAwaitingConnection;
    private int maximumPoolSize;
    private int minimumIdle;
}
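
The Hikari snippet iterates over a hikariDataSources collection whose origin is not shown in the excerpt. One way it could be populated, sketched below under the assumption that the data sources are registered as Spring beans, is to collect every HikariDataSource bean from the application context; the class and method names here are hypothetical:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.ApplicationContext;

// Hypothetical helper: gathers all HikariDataSource beans so the stats thread can poll them.
public class HikariDataSourceCollector {

    public static List<HikariDataSource> collect(ApplicationContext context) {
        Map<String, HikariDataSource> beans = context.getBeansOfType(HikariDataSource.class);
        return new ArrayList<>(beans.values());
    }
}

Note that getHikariPoolMXBean() may return null until the pool has actually been initialized (typically on the first getConnection()), which is presumably why the snippet wraps it in Optional.ofNullable.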

Note: the above is only one possible solution.
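
Neither excerpt shows how the stats threads are started. Below is a minimal sketch of one way to wire that up, assuming the threads live inside a Spring-managed component; every name here except Thread itself is hypothetical:

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

import org.springframework.stereotype.Component;

// Hypothetical lifecycle wrapper: starts a daemon polling thread when the bean is
// created and interrupts it on shutdown so the while (!isInterrupted()) loop exits cleanly.
@Component
public class MetricStatsLifecycle {

    private Thread statsThread;

    @PostConstruct
    public void start() {
        // In the real code this would be new DruidStatsThread("...") or new HikariStatsThread("...").
        statsThread = new Thread(() -> { /* polling loop as in the excerpts above */ }, "metric-stats");
        statsThread.setDaemon(true);
        statsThread.start();
    }

    @PreDestroy
    public void stop() {
        if (statsThread != null) {
            statsThread.interrupt();
        }
    }
}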

That covers the types of database data source monitoring. If you want to learn more or broaden your knowledge, feel free to follow the 億速云 industry news channel.
