This article walks through the process of building a Greenplum cluster. The steps are simple and practical; follow along below.
The environment consists of four virtual machines: one master and three segment hosts, with the third segment host also serving as the standby master. Hostnames: gpms, gps1, gps2, gps3.
Versions: Red Hat 7.3 + Greenplum 5.16.
-- Kernel parameters
cat <<EOF >>/etc/sysctl.conf
#add by xyy for greenplum 20181016
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 500 1024000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 10000 65535
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.overcommit_memory = 2
vm.swappiness = 10
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.dirty_background_ratio = 0
vm.dirty_ratio = 0
vm.dirty_background_bytes = 1610612736
vm.dirty_bytes = 4294967296
EOF

-- Resource limits
vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
* soft core unlimited
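After editing both files, the kernel settings can be applied and spot-checked as follows (a quick verification sketch, assuming the edits were made on every host; not part of the original steps):

# Reload sysctl settings from /etc/sysctl.conf and confirm a couple of values
sysctl -p
sysctl kernel.shmmax vm.overcommit_memory

# limits.conf takes effect on the next login; verify from a fresh shell
ulimit -n    # expect 65536
ulimit -u    # expect 131072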
-- Create the gpadmin user and the installation directory (remove any existing gpadmin first)
groupdel gpadmin
userdel gpadmin
groupadd -g 530 gpadmin
useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin
chown -R gpadmin:gpadmin /home/gpadmin
passwd gpadmin
mkdir /opt/greenplum
chown -R gpadmin:gpadmin /opt/greenplum

-- /etc/hosts
192.168.80.161 gpms
192.168.80.162 gps1
192.168.80.163 gps2
192.168.80.164 gps3
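A quick way to confirm the user, the directory ownership, and name resolution are in place (an illustrative check, not from the original):

id gpadmin                          # expect uid=530(gpadmin) gid=530(gpadmin)
ls -ld /opt/greenplum               # should be owned by gpadmin:gpadmin
getent hosts gpms gps1 gps2 gps3    # each name should resolve to its 192.168.80.x address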
-- Install the Greenplum binaries as gpadmin (install path: /opt/greenplum/greenplum-db)
su - gpadmin
./greenplum-db-5.16.0-rhel7-x86_64.bin
source /opt/greenplum/greenplum-db/greenplum_path.sh

-- Host list files used by the gp* utilities
[gpadmin@gptest conf]$ pwd
/home/gpadmin/conf
[gpadmin@gptest conf]$ cat hostlist
gpms
gps1
gps2
gps3
[gpadmin@gptest conf]$ cat seg_hosts
gps1
gps2
gps3
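As a sanity check that the installation completed and greenplum_path.sh is in effect, commands along these lines can be run (illustrative, not from the original):

which gpssh gpinitsystem     # should resolve under /opt/greenplum/greenplum-db/bin
echo $GPHOME                 # set by greenplum_path.sh
postgres --gp-version        # prints the Greenplum build, e.g. 5.16.x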
-- Exchange SSH keys so all hosts trust each other
gpssh-exkeys -f hostlist
-- Run commands on every host in parallel
gpssh -f hostlist
-- Package the installation and copy it to the segment hosts
tar -cvf gp5.6.tar greenplum-db-5.16.0/
gpscp -f /home/gpadmin/conf/seg_hosts gp5.6.tar =:/opt/greenplum/
gpssh -f seg_hosts
cd /opt/gr*
tar -xvf gp5.6.tar
ln -s greenplum-db-5.16.0 greenplum-db
-- Create the data directories on all hosts (inside an interactive gpssh session)
gpssh -f hostlist
mkdir -p /home/gpadmin/gpdata/gpmaster
mkdir -p /home/gpadmin/gpdata/gpdatap1
mkdir -p /home/gpadmin/gpdata/gpdatap2
mkdir -p /home/gpadmin/gpdata/gpdatam1
mkdir -p /home/gpadmin/gpdata/gpdatam2
-- Configure the gpadmin environment variables
echo "source /opt/greenplum/greenplum-db/greenplum_path.sh" >> /home/gpadmin/.bash_profile
echo "export MASTER_DATA_DIRECTORY=/home/gpadmin/gpdata/gpmaster/gpseg-1" >> /home/gpadmin/.bash_profile
echo "export PGPORT=2345" >> /home/gpadmin/.bash_profile
echo "export PGDATABASE=testdb" >> /home/gpadmin/.bash_profile
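Before initializing, the key exchange and directory layout can be confirmed with something like the following (a sketch; the exact command string is illustrative):

# Should reach every host without a password prompt and list the primary/mirror data directories
gpssh -f hostlist -e 'hostname; ls -d /home/gpadmin/gpdata/gpdata*'

# Pick up the new environment variables on the master
source /home/gpadmin/.bash_profile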
-- The gpinitsystem_config template ships under docs/cli_help/gpconfigs
cd /opt/greenplum/greenplum-db/docs/cli_help/gpconfigs
[gpadmin@gptest conf]$ vi gpinitsystem_config
[gpadmin@gptest conf]$ cat gpinitsystem_config | grep -v '#' | grep -v '^$'
ARRAY_NAME="Greenplum Data Platform"
# Prefix for segment data directory names
SEG_PREFIX=gpseg
# Base port for primary segments
PORT_BASE=33000
# Primary segment data directories
declare -a DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatap1 /home/gpadmin/gpdata/gpdatap2)
# Master host
MASTER_HOSTNAME=gpms
# Master data directory
MASTER_DIRECTORY=/home/gpadmin/gpdata/gpmaster
MASTER_PORT=2345
TRUSTED_SHELL=/usr/bin/ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
# Base port for mirror segments
MIRROR_PORT_BASE=43000
# Base port for primary segment replication
REPLICATION_PORT_BASE=34000
# Base port for mirror segment replication
MIRROR_REPLICATION_PORT_BASE=44000
# Mirror segment data directories
declare -a MIRROR_DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatam1 /home/gpadmin/gpdata/gpdatam2)

-- Initialize the cluster (-h: segment host file, -s: standby master host, -S: spread mirror layout)
gpinitsystem -c gpinitsystem_config -h seg_hosts -s gps3 -S
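Once gpinitsystem completes, cluster health can be checked with gpstate (illustrative invocations):

gpstate -s    # detailed status of every primary and mirror segment
gpstate -m    # mirror segment status
gpstate -f    # standby master details (should show gps3)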
-- Filespaces and tablespaces (GP5 tablespaces are built on filespaces;
-- siling_fs must already exist, created with the gpfilespace utility)
select * from pg_filespace;
create tablespace tbs_siling filespace siling_fs;
select a.spcname, b.fsname from pg_tablespace a, pg_filespace b where a.spcfsoid = b.oid;

-- Create a database and a user, and grant privileges
create database testdb tablespace tbs_siling;
create user testuser password 'testuser';
grant all on database testdb to testuser;
select rolname, oid from pg_roles;

-- Set the user's default tablespace and grant access to it
alter user testuser set default_tablespace = 'tbs_siling';
grant all on tablespace tbs_siling to testuser;

-- Create a schema and grant access
create schema siling_mode;
grant all on schema siling_mode to testuser;

-- Start / stop the cluster
gpstart -a
gpstop -a

-- Remote connections: change the password, allow the subnet in pg_hba.conf, then reload
alter role gpadmin with password 'gpadmin';
host all all 192.168.80.0/0 md5
gpstop -u
psql -h 192.168.80.161 -d testdb -p 2345

-- Data in Greenplum is distributed across all segments. The master returns rows in the order it
-- receives them, and the order in which segments deliver their rows is nondeterministic, so the
-- row order of a plain SELECT is also nondeterministic.
select gp_segment_id, count(*) from test2020 group by gp_segment_id;

-- Segment status in gp_segment_configuration:
-- mode: s = synchronized, r = resynchronizing, c = change tracking (not synchronized)
-- status: u = up, d = down
select * from gp_segment_configuration;
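The test2020 table queried above is not defined earlier in the article; the following is a minimal sketch of how such a table could be created and its distribution checked (table name, columns, and distribution key are illustrative):

psql -h 192.168.80.161 -p 2345 -d testdb <<'SQL'
-- Distribute rows across the segments by id
create table test2020 (id int, note text) distributed by (id);
insert into test2020 select g, 'row ' || g from generate_series(1, 10000) g;

-- Each segment should hold a roughly equal share of the rows
select gp_segment_id, count(*) from test2020 group by gp_segment_id order by 1;
SQL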
That covers the process of building a Greenplum cluster; now that you have a clearer picture of the steps, it is well worth trying them out in practice.