This article walks through installing Oracle 11g RAC on RHEL 7, step by step, from operating-system preparation through Grid Infrastructure, database software installation, and post-install tuning.
1. Disable SELinux
getenforce
setenforce 0
vim /etc/selinux/config    # set SELINUX=disabled so the change persists across reboots
2. Stop the firewall and disable it at boot
systemctl stop firewalld.service
systemctl disable firewalld.service
3. Set the hostnames
Underscores ("_") are not allowed in hostnames; use lowercase letters and keep the name under 8 characters.
hostnamectl set-hostname mydb1    # on node 1
hostnamectl set-hostname mydb2    # on node 2
Log out and back in after the change.
4. Configure a local yum repository
mount -t iso9660 -o loop /dev/sr0 /media/
cat rhel7.repo
[base]
name=rhel7.7
baseurl=file:///media
enabled=1
gpgcheck=0
5. Stop unnecessary services
Redhat 6:
service iptables stop
chkconfig iptables off
service ip6tables stop
chkconfig ip6tables off
service sshd start
chkconfig sshd on
service bluetooth stop
chkconfig bluetooth off
service postfix stop
chkconfig postfix off
service cups stop
chkconfig cups off
service cpuspeed stop
chkconfig cpuspeed off
service NetworkManager stop
chkconfig NetworkManager off
service vsftpd stop
chkconfig vsftpd off
service dhcpd stop
chkconfig dhcpd off
service nfs stop
chkconfig nfs off
service nfslock stop
chkconfig nfslock off
service ypbind stop
chkconfig ypbind off
Redhat 7: .................
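The Redhat 7 equivalents above are left unfinished; a hedged sketch using systemctl follows. The service names in the list are assumptions carried over from the Redhat 6 section — confirm with `systemctl list-unit-files` before disabling anything on a real host.

```shell
# Sketch: disable the same optional services on Redhat 7 via systemctl.
SERVICES="bluetooth postfix cups NetworkManager vsftpd dhcpd nfs ypbind"

disable_services() {
    run=${DRYRUN:+echo}    # DRYRUN=1 prints the commands instead of running them
    for svc in $SERVICES; do
        $run systemctl stop "${svc}.service"
        $run systemctl disable "${svc}.service"
    done
    $run systemctl enable sshd.service    # sshd stays on, as in the Redhat 6 list
}

# Dry run: print the plan without touching the system
DRYRUN=1
disable_services
```

Drop the DRYRUN=1 line to actually apply the changes.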
5. Install dependency packages
-- Check:
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' \
binutils \
compat-libcap1 \
compat-libstdc++-33 \
gcc \
gcc-c++ \
glibc \
glibc-devel \
ksh \
libstdc++ \
libstdc++-devel \
libaio \
libaio-devel \
make \
sysstat
-- Install:
yum -y install binutils compat-libstdc++-33 gcc gcc-c++ glibc glibc-common \
glibc-devel ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel make \
sysstat openssh-clients compat-libcap1 xorg-x11-utils xorg-x11-xauth \
elfutils unixODBC unixODBC-devel libXp elfutils-libelf elfutils-libelf-devel \
smartmontools
6. Kernel parameter changes
Add the following to /etc/sysctl.conf:
fs.file-max = 6815744          -- formula: cat /proc/sys/fs/file-max + 512 * processes * number of instances
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 536870912      -- formula: total RAM in bytes / page size (getconf PAGE_SIZE)
kernel.shmmax = 1073741824     -- formula: half of total RAM in bytes
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 4194304
vm.dirty_ratio = 20
vm.dirty_background_ratio = 3
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 500
vm.swappiness = 10
vm.min_free_kbytes = 524288
# rp_filter: set to 2 on every private-interconnect NIC (one line per interface; ens39 here)
net.ipv4.conf.ens39.rp_filter = 2
# Max/min memory used for IP fragment reassembly. Formula: numCPU * 130000.
# With 96 logical CPUs, set the high threshold to at least 12 MB, and the
# low threshold 1 MB below the high one.
net.ipv4.ipfrag_high_thresh = 16777216
net.ipv4.ipfrag_low_thresh = 15728640
net.ipv4.ipfrag_time = 60
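The shmmax/shmall formulas above can be computed directly on the host. A minimal sketch (assumes a Linux host, since it reads /proc/meminfo):

```shell
# Compute kernel.shmmax / kernel.shmall from the formulas above:
# shmmax = half of RAM in bytes; shmall = RAM in bytes / page size.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
page_size=$(getconf PAGE_SIZE)

shmmax=$(( mem_kb * 1024 / 2 ))
shmall=$(( mem_kb * 1024 / page_size ))

echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```

The printed values can be pasted into /etc/sysctl.conf in place of the sample numbers.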
7. Disable transparent huge pages
Redhat 7:
# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
-- Add transparent_hugepage=never to the kernel command line:
cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=myvg/swap rd.lvm.lv=myvg/usr vconsole.font=latarcyrheb-sun16 rd.lvm.lv=myvg/root crashkernel=auto vconsole.keymap=us rhgb quiet transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
-- Regenerate the grub configuration:
# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-123.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-123.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-41c535c189b842eea5a8c20cbd9bff26
Found initrd image: /boot/initramfs-0-rescue-41c535c189b842eea5a8c20cbd9bff26.img
done
-- Stop the tuned service:
# systemctl stop tuned.service
# systemctl disable tuned.service
-- Reboot:
reboot
-- Confirm:
cat /sys/kernel/mm/transparent_hugepage/enabled
Redhat 6 and Redhat 7: add the following to /etc/rc.d/rc.local (takes effect after reboot):
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
Confirm after rebooting:
cat /sys/kernel/mm/transparent_hugepage/defrag
always [never]
cat /sys/kernel/mm/transparent_hugepage/enabled
always [never]
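The active THP mode is the bracketed word in the sysfs file shown above. A small helper (a sketch, not from the original article) that extracts it:

```shell
# Extract the active value from the bracketed THP status format,
# e.g. "always madvise [never]" -> "never".
thp_active() {
    sed -n 's/.*\[\(.*\)\].*/\1/p'
}

# On a running system, check the real sysfs file:
#   thp_active < /sys/kernel/mm/transparent_hugepage/enabled
echo "always madvise [never]" | thp_active    # prints: never
```

A post-reboot sanity check can then assert the result equals "never" before proceeding with the install.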
8. NTP configuration
Redhat 6:
# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"    # -x enables slew (gradual adjustment) mode
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=yes    # also sync the hardware (BIOS) clock
# vi /etc/ntp.conf
server xx.xx.xx.xx prefer    # preferred NTP server address
server 127.127.1.0 iburst    # local clock as a secondary source
# vi /etc/ntp/step-tickers
xx.xx.xx.xx    # NTP server address, synced once when ntpd starts
After configuring, enable and restart ntpd and check its status:
chkconfig ntpd on
service ntpd restart
ntpstat    # check ntpd sync status
date       # verify the time is correct
Redhat 7:
yum install ntp
vi /etc/ntp.conf
server 10.5.26.10 iburst
vi /etc/sysconfig/ntpd
OPTIONS="-x -g"
systemctl start ntpd.service
systemctl enable ntpd.service
systemctl status ntpd.service
9. NIC bonding
Redhat 6:
touch /etc/modprobe.d/bonding.conf
echo "alias bond0 bonding" >> /etc/modprobe.d/bonding.conf
vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
NM_CONTROLLED=no
IPADDR=10.1.2.3
NETMASK=255.255.255.0
GATEWAY=10.1.2.254
BONDING_OPTS="mode=1 miimon=100"
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no
echo "ifenslave bond0 eth0 eth2" >> /etc/rc.d/rc.local
Reboot the server, then check the status:
cat /sys/class/net/bonding_masters
cat /sys/class/net/bond0/bonding/mode
cat /proc/net/bonding/bond0
Verify failover by unplugging and replugging the network cables.
Redhat 7: use nmcli team (network teaming)
10. Network checks
(1) Make sure the inter-node (private) network goes through a dedicated switch rather than a direct cable connection.
(2) Make sure the NIC names and subnets for each network are identical on all nodes. For example, every node's public NIC is named eth0, with IP addresses in subnet 133.37.x.0 and netmask 255.255.255.0.
(3) Make sure there is one and only one default route, and that it is reached through the public network.
(4) Make sure the NIC-to-network bandwidth is as expected:
# ethtool eth2
Settings for eth2:
    Supported ports: [ TP ]
    Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
    Supports auto-negotiation: Yes
    Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Twisted Pair
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: umbg
    Wake-on: g
    Current message level: 0x00000003 (3)
    Link detected: yes
"Speed: 1000Mb/s" above is the actual negotiated bandwidth. Even when both the switch and the NIC are rated at 1000 Mbps or more, port or cabling problems can leave the actual bandwidth lower.
(5) Make sure multicast is enabled on the private network. The mcasttest.pl script, downloadable from the Oracle support site, can be used to check this.
11. Storage multipath configuration
See https://www.modb.pro/db/14031
12. Create users and groups
groupadd -g 1000 oinstall
groupadd -g 1001 dba
groupadd -g 1002 oper
groupadd -g 1003 asmadmin
groupadd -g 1004 asmoper
groupadd -g 1005 asmdba
useradd -u 1000 -g oinstall -G dba,oper,asmdba oracle
useradd -u 1001 -g oinstall -G dba,asmadmin,asmdba,asmoper,oper grid
13. Resource limits
touch /etc/security/limits.d/99-grid-oracle-limits.conf
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 10240
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock unlimited
grid hard memlock unlimited
grid soft core unlimited
grid hard core unlimited
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 10240
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
oracle soft memlock unlimited
oracle hard memlock unlimited
oracle soft core unlimited
oracle hard core unlimited

touch /etc/profile.d/oracle-grid.sh
# Setting the appropriate ulimits for oracle and grid user
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -u 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
if [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -u 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
14. Create directories
Grid software BASE directory: /u01/app/grid
Grid software HOME directory: /u01/app/11.2.0/grid
Database software BASE directory: /u01/app/oracle
Database software HOME directory: /u01/app/oracle/product/11.2.0/db_home
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/
mkdir -p /u01/app/oracle/product/11.2.0/db_home
chown -R oracle:oinstall /u01/app/oracle/
chmod -R 755 /u01
15. Set environment variables for the oracle and grid users
oracle:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=racdb1    # racdb2 on node 2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_home
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export LANG=en_US.UTF-8
umask 022
grid:
export ORACLE_SID=+ASM1    # +ASM2 on node 2
export ORACLE_OWNER=grid
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export LANG=en_US.UTF-8
export PATH
umask 022
16. Edit the hosts file
Public IP: the alias is the hostname itself, as returned by uname -a.
Private IP: the alias is hostname-priv.
Virtual IP: the alias is hostname-vip.
SCAN IP: the alias is dbname-scan.
#Public Ip
192.168.0.203 mydb1
192.168.0.204 mydb2
#Virtual Ip
192.168.0.205 mydb1-vip
192.168.0.206 mydb2-vip
#Private Ip
192.168.124.203 mydb1-priv
192.168.124.204 mydb2-priv
#Scan Ip
192.168.0.207 racdb-scan
17. Set up SSH user equivalence
When installing the Grid software, use sshUserSetup.sh to set up user equivalence quickly.
Replace the $node1 and $node2 variables with the actual node names.
Run on one node only (can be run as root):
./sshUserSetup.sh -user grid -hosts "$node1 $node2" -advanced -exverify -confirm
./sshUserSetup.sh -user oracle -hosts "$node1 $node2" -advanced -exverify -confirm
18. Naming conventions
1) Cluster name
The cluster name has no special runtime use; it only matters when managing multiple RAC clusters with other tools. It must not exceed 15 characters.
${DB_NAME}-cls
2) SCAN name
The SCAN name likewise has no special runtime use and only matters for unified management tools. It must not exceed 15 characters.
${DB_NAME}-scan
19. Checks that can be ignored during installation
1) Package: pdksh-5.2.14
2) Device Checks for ASM
3) Task resolv.conf Integrity
20. During Grid software installation, add the ohas service before running the root.sh script
touch /usr/lib/systemd/system/ohas.service
chmod 644 /usr/lib/systemd/system/ohas.service
ohas.service (note: systemd does not interpret shell redirections such as ">/dev/null 2>&1" in ExecStart, so none are used):
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
ExecStart=/etc/init.d/init.ohasd run
Type=simple
Restart=always

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl enable ohas.service
systemctl start ohas.service
systemctl status ohas.service
21. Run the orainstRoot.sh and root.sh scripts on both nodes
Execution order: orainstRoot.sh on node 1, orainstRoot.sh on node 2, root.sh on node 1, root.sh on node 2.
Node 1:
[root@mydb1 system]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@mydb1 system]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
**ohasd failed to start**    ## If this message appears, restart the ohas.service
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2020-01-10 08:16:18.019:
[client(23496)]CRS-2101:The OLR was formatted using version 3.
CRS-2672: Attempting to start 'ora.mdnsd' on 'mydb1'
CRS-2676: Start of 'ora.mdnsd' on 'mydb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'mydb1'
CRS-2676: Start of 'ora.gpnpd' on 'mydb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'mydb1'
CRS-2672: Attempting to start 'ora.gipcd' on 'mydb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'mydb1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'mydb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'mydb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'mydb1'
CRS-2676: Start of 'ora.diskmon' on 'mydb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'mydb1' succeeded
ASM created and started successfully.
Disk Group OCRDG created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 992a298111ba4fb8bf16c75cdd232ca8.
Successfully replaced voting disk group with +OCRDG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name                 Disk group
--  -----    -----------------                ---------                 ----------
 1. ONLINE   992a298111ba4fb8bf16c75cdd232ca8 (/dev/mapper/asm_ocr1p1)  [OCRDG]
Located 1 voting disk(s).
sh: /bin/netstat: No such file or directory
CRS-2672: Attempting to start 'ora.asm' on 'mydb1'
CRS-2676: Start of 'ora.asm' on 'mydb1' succeeded
CRS-2672: Attempting to start 'ora.OCRDG.dg' on 'mydb1'
CRS-2676: Start of 'ora.OCRDG.dg' on 'mydb1' succeeded
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Node 2:
[root@mydb2 etc]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@mydb2 etc]# cd
[root@mydb2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
**ohasd failed to start**    ## If this message appears, restart the ohas.service
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2020-01-10 08:27:01.458:
[client(21808)]CRS-2101:The OLR was formatted using version 3.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node mydb1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
sh: /bin/netstat: No such file or directory
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
22. Errors that can be ignored
1) Configure Oracle Grid Infrastructure for a Cluster
2) Oracle Cluster Verification Utility
23. Checks that can be ignored during database software installation
1) Package: pdksh-5.2.14
2) Task resolv.conf Integrity
3) Single Client Access Name (SCAN)
23. Installation error
Error message: Error in invoking target 'agent nmhs' of makefile '/u01/app/oracle/product/11.2.0/db_home/sysman/lib/ins_emagent.mk'.
Fix:
cd $ORACLE_HOME/sysman/lib
cp ins_emagent.mk ins_emagent.mk.bak
vi ins_emagent.mk
Search for /NMECTL and change the line to:
$(MK_EMAGENT_NMECTL) -lnnz11
Note: in "-lnnz11" the first character after the dash is the letter "l"; the last two characters are the digit "1".
Then click Retry.
24. As root, run root.sh on both nodes
/u01/app/oracle/product/11.2.0/db_home/root.sh
[root@mydb2 ~]# /u01/app/oracle/product/11.2.0/db_home/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_home
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
25. Use asmca to create the disk groups needed by the database
REDODG
DATADG
ARCHDG
26. Use dbca to create the database instance
27. Configure HugePages
Benefits of using HugePages:
1. A smaller page table. Each HugePage maps 2 MB of contiguous physical memory, so 12 GB of physical memory needs only about 48 KB of page table, compared with roughly 24 MB using 4 KB pages.
2. HugePages are locked in physical memory and can never be swapped out, avoiding the performance impact of swapping.
3. With far fewer page-table entries, the hit ratio of the CPU's TLB (effectively the CPU's cache of the page table) improves significantly.
4. The page table for HugePages is shared between processes, which further reduces its size. This actually exposes a weakness in Linux's paging design: other operating systems, such as AIX, let processes share one page table for shared memory segments and avoid the problem entirely. On one system the author maintains, with typically over 5000 connections and an SGA of about 60 GB, ordinary 4 KB paging would consume most of the system's memory in page tables alone.
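The page-table figures in point 1 can be checked with a few lines of arithmetic (a sketch; it assumes 8 bytes per page-table entry, the usual x86_64 figure):

```shell
# Verify the claim: mapping 12 GB with 4 KB pages vs 2 MB HugePages.
GB=$((1024 * 1024 * 1024))
sga=$((12 * GB))

entries_4k=$(( sga / 4096 ))                # entries with the default 4 KB pages
entries_2m=$(( sga / (2 * 1024 * 1024) ))   # entries with 2 MB HugePages

echo "4 KB pages: $(( entries_4k * 8 / 1024 / 1024 )) MB of page table"   # 24 MB
echo "2 MB pages: $(( entries_2m * 8 / 1024 )) KB of page table"          # 48 KB
```

The results match the 24 MB and 48 KB figures quoted above, and the same arithmetic scales the benefit for any SGA size.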
Implementation steps:
1) Check /proc/meminfo:
grep -i hugepage /proc/meminfo
2) Compute HugePages_Total with the hugepages_settings.sh script:
#!/bin/bash
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
# Start from 1 pages to be on the safe side and guarantee 1 free HugePage
NUM_PG=1
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk '{print $5}' | grep "[0-9][0-9]*"`
do
  MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
  if [ $MIN_PG -gt 0 ]; then
    NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
  fi
done
# Finish with results
case $KERN in
  '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
         echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
  '2.6') MEM_LOCK=`echo "$NUM_PG*$HPG_SZ" | bc -q`;
         echo "Recommended setting within the kernel boot command line(/etc/sysctl.conf): vm.nr_hugepages = $NUM_PG"
         echo "Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf: oracle soft memlock $MEM_LOCK"
         echo "Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf: oracle hard memlock $MEM_LOCK" ;;
  '3.10') MEM_LOCK=`echo "$NUM_PG*$HPG_SZ" | bc -q`;
         echo "Recommended setting within the kernel boot command line(/etc/sysctl.conf): vm.nr_hugepages = $NUM_PG"
         echo "Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf: oracle soft memlock $MEM_LOCK"
         echo "Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf: oracle hard memlock $MEM_LOCK" ;;
  *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
# end
3) Add the value computed above to /etc/sysctl.conf:
vm.nr_hugepages=9218
4) Apply it:
sysctl -p
5) Edit /etc/security/limits.d/99-grid-oracle-limits.conf so the oracle user can lock enough memory (in KB; a concrete value or unlimited both work):
oracle soft memlock unlimited
oracle hard memlock unlimited
6) Restart the instance.
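The sizing rule the script implements reduces, for a single SGA segment, to "enough 2 MB pages to cover the SGA plus a couple of spare pages". A sketch with an assumed 18 GB SGA (an example value; the vm.nr_hugepages=9218 in step 3 corresponds to roughly this size):

```shell
# Sizing rule: pages to cover the SGA, plus the script's 1 safety page
# and 1 extra page per shared memory segment (one segment assumed here).
hpg_kb=2048                      # Hugepagesize from /proc/meminfo, in KB
sga_kb=$((18 * 1024 * 1024))     # assumed example: an 18 GB SGA, in KB

nr_hugepages=$(( sga_kb / hpg_kb + 2 ))
echo "vm.nr_hugepages = $nr_hugepages"    # prints: vm.nr_hugepages = 9218
```

On a live system, prefer the hugepages_settings.sh script above: it reads the actual shared memory segments from `ipcs -m`, so it handles multiple instances and non-round SGA sizes.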
28. Change the local and SCAN listener ports to 11521
1) Confirm the current configuration:
[grid@mydb1 ~]$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:1521
[grid@mydb1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
2) Change the listener and scan_listener ports to 11521 (run on one node):
srvctl modify listener -l LISTENER -p "TCP:11521"
srvctl modify scan_listener -p 11521
3) Change local_listener in sqlplus:
alter system set local_listener = '(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.205)(PORT = 11521))' scope=both sid='racdb1';
alter system set local_listener = '(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.206)(PORT = 11521))' scope=both sid='racdb2';
4) Change remote_listener:
alter system set remote_listener='racdb-scan:11521' scope=both;
5) On node 1, stop the local listener, change 1521 to 11521 in listener.ora, endpoints_listener.ora and tnsnames.ora, then restart the listener:
srvctl stop listener -l LISTENER -n mydb1
cd $ORACLE_HOME/network/admin
vi endpoints_listener.ora
vi listener.ora
srvctl start listener -l LISTENER -n mydb1
srvctl status listener -l LISTENER
srvctl config listener
6) Repeat on node 2:
srvctl stop listener -l LISTENER -n mydb2
cd $ORACLE_HOME/network/admin
vi endpoints_listener.ora
vi listener.ora
srvctl start listener -l LISTENER -n mydb2
srvctl status listener -l LISTENER
srvctl config listener
7) Change the ASM listener port (otherwise lsnrctl status will not show the ASM service registered with the listener):
su - grid
sqlplus / as sysdba
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.205)(PORT=11521))' scope=both sid='+ASM1';
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.206)(PORT=11521))' scope=both sid='+ASM2';
lsnrctl status
29. ASM parameter tuning
ASM disk groups default to a 1 MB AU (allocation unit) size. For large databases this increases memory usage and slightly hurts performance, so for new disk groups that will hold datafiles it is advisable to use a larger AU, such as 4 MB or 8 MB (a power of 2). Based on practical experience at telecom operators, 4 MB is the recommended setting.
30. Recommended database parameter changes
alter system set resource_manager_plan='FORCE:' scope=spfile sid='*';
alter system set audit_trail=none scope=spfile sid='*';
alter system set undo_retention=10800 scope=spfile sid='*';
alter system set session_cached_cursors=200 scope=spfile sid='*';
alter system set db_files=2000 scope=spfile sid='*';
alter system set max_shared_servers=0 scope=spfile sid='*';
alter system set sec_max_failed_login_attempts=100 scope=spfile sid='*';
alter system set deferred_segment_creation=false scope=spfile sid='*';
alter system set parallel_force_local=true scope=spfile sid='*';
alter system set parallel_max_servers=32 scope=spfile sid='*';
alter system set sec_case_sensitive_logon=false scope=spfile sid='*';
alter system set open_cursors=3000 scope=spfile sid='*';
alter system set open_links=40 scope=spfile sid='*';
alter system set open_links_per_instance=40 scope=spfile sid='*';
alter system set db_cache_advice=off scope=spfile sid='*';
alter system set "_b_tree_bitmap_plans"=false scope=spfile sid='*';
alter system set "_gc_policy_time"=0 scope=spfile sid='*';
alter system set "_gc_defer_time"=3 scope=spfile sid='*';
alter system set "_lm_tickets"=5000 scope=spfile sid='*';
alter system set "_optimizer_use_feedback"=false sid='*';
alter system set "_undo_autotune"=false scope=both sid='*';
alter system set "_bloom_filter_enabled"=FALSE scope=spfile sid='*';
alter system set "_cleanup_rollback_entries"=2000 scope=spfile sid='*';
alter system set "_px_use_large_pool"=true scope=spfile sid='*';
alter system set "_optimizer_extended_cursor_sharing_rel"=NONE scope=spfile sid='*';
alter system set "_optimizer_extended_cursor_sharing"=NONE scope=spfile sid='*';
alter system set "_optimizer_adaptive_cursor_sharing"=false scope=spfile sid='*';
alter system set "_optimizer_mjc_enabled"=FALSE scope=spfile sid='*';
alter system set "_sort_elimination_cost_ratio"=1 scope=spfile sid='*';
alter system set "_partition_large_extents"=FALSE scope=spfile sid='*';
alter system set "_index_partition_large_extents"=FALSE scope=spfile sid='*';
alter system set "_clusterwide_global_transactions"=FALSE scope=spfile sid='*';
alter system set "_part_access_version_by_number"=FALSE scope=spfile;
alter system set "_partition_large_extents"=FALSE scope=spfile;
alter system set "_sort_elimination_cost_ratio"=1 scope=spfile;
alter system set "_use_adaptive_log_file_sync"=FALSE scope=spfile;
alter system set "_lm_sync_timeout"=1200 scope=spfile;
alter system set "_ksmg_granule_size"=134217728 scope=spfile;
alter system set "_optimizer_cartesian_enabled"=false scope=spfile;
alter system set "_external_scn_logging_threshold_seconds"=3600 scope=spfile;
alter system set "_datafile_write_errors_crash_instance"=false scope=spfile;
alter system set event='28401 TRACE NAME CONTEXT FOREVER, LEVEL 1:60025 trace name context forever:10949 trace name context forever,level 1' sid='*' scope=spfile;