
Greenplum segment node failure recovery

Published: 2020-07-29 17:47:11  Source: web  Views: 8384  Author: 無鋒劍  Column: Database
Segment detection and failover mechanism

The GP Master first probes the Primary's status; if the Primary is unreachable, it then probes the Mirror. The Primary/Mirror pair can be in one of four states:

  1. Primary up, Mirror up. After probing the Primary successfully, the GP Master returns immediately and moves on to the next segment.
  2. Primary up, Mirror down. After probing the Primary successfully, the GP Master learns from the Primary's returned status that the Mirror is down (once the Mirror goes down, the Primary detects it and switches itself into ChangeTracking mode). The Master metadata is updated, and probing moves on to the next segment.
  3. Primary down, Mirror up. After the probe of the Primary fails, the GP Master probes the Mirror and finds it alive; it updates the Master metadata and has the Mirror take over as Primary (failover), then moves on to the next segment.
  4. Primary down, Mirror down. After the probe of the Primary fails, the GP Master probes the Mirror and finds it down as well. It retries up to the maximum, then ends the probe of this segment without updating the Master metadata, and moves on to the next segment.

Cases 2-4 above require running gprecoverseg to recover the segment.
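The four-state decision table above can be sketched as a small shell function (illustration only; the real probe logic runs inside the GP Master process, and the action strings here are made up for the sketch):

```shell
#!/bin/sh
# Sketch of the Master's per-segment probe decision table.
# Usage: fts_action <primary> <mirror>, each "up" or "down".
fts_action() {
    case "$1/$2" in
        up/up)     echo "next segment" ;;                              # state 1
        up/down)   echo "primary -> ChangeTracking; update catalog" ;; # state 2
        down/up)   echo "promote mirror; update catalog" ;;            # state 3
        down/down) echo "retry until limit; catalog unchanged" ;;      # state 4
    esac
}

fts_action down up   # prints "promote mirror; update catalog"
```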

Failed segment instances are simply skipped and ignored at startup:

[gpadmin@mdw ~]$ gpstart
gpstart:mdw:gpadmin-[INFO]:-Starting gpstart with args:
gpstart:mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
gpstart:mdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 5.0.0 build 1'
......
gpstart:mdw:gpadmin-[INFO]:-Master Started...
gpstart:mdw:gpadmin-[INFO]:-Shutting down master
gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw2 directory /data/gpdata/gpdatam/gpseg0 <<<<<
gpstart:mdw:gpadmin-[INFO]:---------------------------
gpstart:mdw:gpadmin-[INFO]:-Master instance parameters
gpstart:mdw:gpadmin-[INFO]:---------------------------
gpstart:mdw:gpadmin-[INFO]:-Database                 = template1
gpstart:mdw:gpadmin-[INFO]:-Master Port              = 1921
gpstart:mdw:gpadmin-[INFO]:-Master directory         = /data/gpdata/pgmaster/gpseg-1
gpstart:mdw:gpadmin-[INFO]:-Timeout                  = 600 seconds
gpstart:mdw:gpadmin-[INFO]:-Master standby           = Off
gpstart:mdw:gpadmin-[INFO]:---------------------------------------
gpstart:mdw:gpadmin-[INFO]:-Segment instances that will be started
gpstart:mdw:gpadmin-[INFO]:---------------------------------------
gpstart:mdw:gpadmin-[INFO]:-   Host   Datadir                               Port    Role
gpstart:mdw:gpadmin-[INFO]:-   sdw1   /data/gpdata/gpdatap/gpseg0   40000   Primary
gpstart:mdw:gpadmin-[INFO]:-   sdw2   /data/gpdata/gpdatap/gpseg1   40000   Primary
gpstart:mdw:gpadmin-[INFO]:-   sdw1   /data/gpdata/gpdatam/gpseg1   50000   Mirror

Continue with Greenplum instance startup Yy|Nn (default=N):
> y
gpstart:mdw:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...

gpstart:mdw:gpadmin-[INFO]:-Process results...
gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
gpstart:mdw:gpadmin-[INFO]:-   Successful segment starts                                            = 3
gpstart:mdw:gpadmin-[INFO]:-   Failed segment starts                                                = 0
gpstart:mdw:gpadmin-[WARNING]:-Skipped segment starts (segments are marked down in configuration)   = 1   <<<<<<<<
gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
gpstart:mdw:gpadmin-[INFO]:-
gpstart:mdw:gpadmin-[INFO]:-Successfully started 3 of 3 segment instances, skipped 1 other segments
gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
gpstart:mdw:gpadmin-[WARNING]:-****************************************************************************
gpstart:mdw:gpadmin-[WARNING]:-There are 1 segment(s) marked down in the database
gpstart:mdw:gpadmin-[WARNING]:-To recover from this current state, review usage of the gprecoverseg
gpstart:mdw:gpadmin-[WARNING]:-management utility which will recover failed segment instance databases.
gpstart:mdw:gpadmin-[WARNING]:-****************************************************************************
gpstart:mdw:gpadmin-[INFO]:-Starting Master instance mdw directory /data/gpdata/pgmaster/gpseg-1
gpstart:mdw:gpadmin-[INFO]:-Command pg_ctl reports Master mdw instance active
gpstart:mdw:gpadmin-[INFO]:-No standby master configured.  skipping...
gpstart:mdw:gpadmin-[WARNING]:-Number of segments not attempted to start: 1
gpstart:mdw:gpadmin-[INFO]:-Check status of database with gpstate utility
Check the startup status of the database's mirror segments:
[gpadmin@mdw ~]$ gpstate -m
gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -m
gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.0.0 build 1'
gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.0.0 build 1)
gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:mdw:gpadmin-[INFO]:--Current GPDB mirror list and status
gpstate:mdw:gpadmin-[INFO]:--Type = Spread
gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:mdw:gpadmin-[INFO]:-   Mirror   Datadir                               Port    Status    Data Status
gpstate:mdw:gpadmin-[WARNING]:-sdw2     /data/gpdata/gpdatam/gpseg0   50000   Failed                   <<<<<<<<
gpstate:mdw:gpadmin-[INFO]:-   sdw1     /data/gpdata/gpdatam/gpseg1   50000   Passive   Synchronized
gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:mdw:gpadmin-[WARNING]:-1 segment(s) configured as mirror(s) have failed

The failure is immediately visible: "[WARNING]:-sdw2 /data/gpdata/gpdatam/gpseg0 50000 Failed".
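When scripting health checks, the failed mirrors can be pulled out of `gpstate -m` output with a simple filter. This is a sketch against the log format shown above; the sample line stands in for live output:

```shell
#!/bin/sh
# Extract host, data directory and port of mirrors reported as Failed.
# The sample line copies the WARNING format from the gpstate -m output above.
sample='gpstate:mdw:gpadmin-[WARNING]:-sdw2     /data/gpdata/gpdatam/gpseg0   50000   Failed'

echo "$sample" |
    awk -F'\\[WARNING\\]:-' '/Failed/ { split($2, f, " "); print f[1], f[2], f[3] }'
# prints: sdw2 /data/gpdata/gpdatam/gpseg0 50000
```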

So how do we recover this mirror segment?

First, generate a recovery configuration file: gprecoverseg -o ./recov

---- A failed primary segment is recovered the same way ----

[gpadmin@mdw ~]$ gprecoverseg -o ./recov
gprecoverseg:mdw:gpadmin-[INFO]:-Starting gprecoverseg with args: -o ./recov
gprecoverseg:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.0.0 build 1'
gprecoverseg:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.0.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on
gprecoverseg:mdw:gpadmin-[INFO]:-Checking if segments are ready
gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
gprecoverseg:mdw:gpadmin-[INFO]:-Configuration file output to ./recov successfully.
Inspect the recovery configuration file

It shows which segments need to be recovered:

[gpadmin@mdw ~]$ cat recov
filespaceOrder=fastdisk
sdw2:50000:/data/gpdata/gpdatam/gpseg0
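For reference, each data line of the file is colon-separated: failed host address, port, and data directory (when recovering to a different location, a second host:port:directory group follows on the same line). A line splits cleanly with shell parameter expansion; this sketch uses the line from the file above:

```shell
#!/bin/sh
# Split one recov data line into its host, port and data directory fields.
line='sdw2:50000:/data/gpdata/gpdatam/gpseg0'

host=${line%%:*}    # everything before the first ':'
rest=${line#*:}     # everything after the first ':'
port=${rest%%:*}
dir=${rest#*:}

echo "host=$host port=$port dir=$dir"
# prints: host=sdw2 port=50000 dir=/data/gpdata/gpdatam/gpseg0
```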
Recover using the configuration file:
[gpadmin@mdw ~]$ gprecoverseg -i ./recov
gprecoverseg:mdw:gpadmin-[INFO]:-Starting gprecoverseg with args: -i ./recov
gprecoverseg:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.0.0 build 1'
gprecoverseg:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.0.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on
gprecoverseg:mdw:gpadmin-[INFO]:-Checking if segments are ready
gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
gprecoverseg:mdw:gpadmin-[INFO]:-Greenplum instance recovery parameters
gprecoverseg:mdw:gpadmin-[INFO]:----------------------------------------------------------
gprecoverseg:mdw:gpadmin-[INFO]:-Recovery from configuration -i option supplied
gprecoverseg:mdw:gpadmin-[INFO]:----------------------------------------------------------
gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 1 of 1
gprecoverseg:mdw:gpadmin-[INFO]:----------------------------------------------------------
gprecoverseg:mdw:gpadmin-[INFO]:-   Synchronization mode                          = Incremental
gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance host                          = sdw2
gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance address                       = sdw2
gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance directory                     = /data/gpdata/gpdatam/gpseg0
gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance port                          = 50000
gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance replication port              = 51000
gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance fastdisk directory            = /data/gpdata/seg1/pg_mir_cdr/gpseg0
gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance host                 = sdw1
gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance address              = sdw1
gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance directory            = /data/gpdata/gpdatap/gpseg0
gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance port                 = 40000
gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance replication port     = 41000
gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance fastdisk directory   = /data/gpdata/seg1/pg_pri_cdr/gpseg0
gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Target                               = in-place
gprecoverseg:mdw:gpadmin-[INFO]:-Process results...
gprecoverseg:mdw:gpadmin-[INFO]:-Done updating primaries
gprecoverseg:mdw:gpadmin-[INFO]:-******************************************************************
gprecoverseg:mdw:gpadmin-[INFO]:-Updating segments for resynchronization is completed.
gprecoverseg:mdw:gpadmin-[INFO]:-For segments updated successfully, resynchronization will continue in the background.
gprecoverseg:mdw:gpadmin-[INFO]:-
gprecoverseg:mdw:gpadmin-[INFO]:-Use  gpstate -s  to check the resynchronization progress.
gprecoverseg:mdw:gpadmin-[INFO]:-******************************************************************
Check the recovery status
[gpadmin@mdw ~]$ gpstate -m
gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -m
gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.0.0 build 1'
......
gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:mdw:gpadmin-[INFO]:--Current GPDB mirror list and status
gpstate:mdw:gpadmin-[INFO]:--Type = Spread
gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:mdw:gpadmin-[INFO]:-   Mirror   Datadir                               Port    Status    Data Status
gpstate:mdw:gpadmin-[INFO]:-   sdw2     /data/gpdata/gpdatam/gpseg0   50000   Passive   Resynchronizing
gpstate:mdw:gpadmin-[INFO]:-   sdw1     /data/gpdata/gpdatam/gpseg1   50000   Passive   Synchronized
gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
Swapping primary and mirror roles back

The primary/mirror pair is now recovered, but one more optional step remains: deciding whether to swap the primary and mirror roles back, because the mirror and primary are currently acting in the opposite of their preferred roles. To swap them back, use the following command; note that it interrupts the database while it runs.

gprecoverseg -r
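Whether a re-balance is actually needed can be seen by comparing `role` with `preferred_role` in the `gp_segment_configuration` catalog table. The sketch below counts mismatches from the pipe-separated output of a hypothetical `psql -Atc "SELECT dbid, role, preferred_role FROM gp_segment_configuration"` call; the sample rows are made up:

```shell
#!/bin/sh
# Count segments currently acting outside their preferred role.
# The sample stands in for (hypothetical) psql -Atc output of:
#   SELECT dbid, role, preferred_role FROM gp_segment_configuration
sample='2|p|p
3|m|p
4|p|m'

echo "$sample" | awk -F'|' '$2 != $3 { n++ } END { print n+0, "segment(s) need re-balance" }'
# prints: 2 segment(s) need re-balance
```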

The utility used to repair segments is gprecoverseg. It is straightforward to use; its main parameters are:

-i : the key parameter; specifies a configuration file describing which segments to repair and the target locations to repair them to.

-F : optional; when given, gprecoverseg deletes the instances specified with "-i" (or marked "d") and copies a complete new copy from the surviving Mirror to the target location.

-r : when FTS detects a Primary failure and performs a failover, the Mirror that took over as Primary does not switch back automatically after gprecoverseg finishes the repair. This can leave too many active segments on some hosts and cause a performance bottleneck, so the segments' original roles need to be restored; this is called re-balance.
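Putting the article's steps together, a typical incremental-recovery session looks like the dry run below. Each command is echoed rather than executed, since the gp* utilities need a live cluster; on a real cluster, replace the echo with actual execution:

```shell
#!/bin/sh
# Dry-run recap of the recovery workflow from this article.
run() { echo "+ $*"; }   # swap the echo for "$@" on a real cluster, as gpadmin

run gpstate -m               # 1. find mirrors marked Failed
run gprecoverseg -o ./recov  # 2. generate the recovery config file
run gprecoverseg -i ./recov  # 3. incremental recovery; resync runs in background
run gpstate -s               # 4. watch resynchronization progress
run gprecoverseg -r          # 5. optional re-balance back to preferred roles
```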