This article covers:
1. Disk quotas on Linux
2. RAID on Linux

I. Disk Quotas

Setting and checking disk quotas on a filesystem prevents users from consuming more space than they are allowed, and keeps the whole filesystem from being filled up by accident.
Overview

Enforced in the kernel
Enabled per filesystem
Separate policies for different users or groups
Limits applied to blocks or to inodes
Soft limit: a warning threshold that may be exceeded temporarily
Hard limit: the absolute maximum, which cannot be exceeded
Quota sizes: expressed in kilobytes (for blocks) or in file counts (for inodes)
Initialization

Mount options on the partition: usrquota, grpquota
Initialize the quota database: quotacheck

Operation

Enable or disable quotas: quotaon, quotaoff
Edit quotas interactively: edquota username
Set quotas directly from the shell:
setquota username 4096 5120 40 50 /foo
Copy quotas from a reference user:
edquota -p user1 user2

Reporting

Per-user report: quota
Filesystem-wide summary: repquota
Other tools: warnquota
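Block quotas are counted in 1K blocks, so a limit given in MB must be converted before it is passed to setquota or edquota. A minimal sketch of that conversion (mb_to_blocks is a hypothetical helper, not a standard tool):

```shell
# Hypothetical helper: convert a size in MiB to the 1K-block counts
# that setquota/edquota expect.
mb_to_blocks() {
    echo $(( $1 * 1024 ))   # one quota block = 1 KiB
}

soft=$(mb_to_blocks 300)    # 300 MiB soft limit -> 307200 blocks
hard=$(mb_to_blocks 500)    # 500 MiB hard limit -> 512000 blocks
echo "setquota xiaoshui $soft $hard 0 0 /home"
```

Note that the walkthrough below uses round decimal figures (300000 and 500000 blocks), which come out slightly under 300 and 500 MiB.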
Example: create and format a partition, mount it at /home, and apply disk quotas to that directory

1. Create a 10G partition and format it as ext4
[root@localhost ~]# fdisk /dev/sdb
[root@localhost ~]# lsblk /dev/sdb
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb      8:16   0  90G  0 disk
└─sdb1   8:17   0  10G  0 part
[root@localhost ~]# mkfs.ext4 /dev/sdb1
2. Move the files under /home elsewhere, mount /dev/sdb1 at /home, then move the data back.
[root@localhost ~]# mv /home/* /testdir/
[root@localhost ~]# ls /home/
[root@localhost ~]#
[root@localhost ~]# mount /dev/sdb1 /home/   # mount /dev/sdb1 at /home
[root@localhost ~]# ls /home/
lost+found
[root@localhost ~]# mv /testdir/* /home/     # move the original data back
[root@localhost ~]# ls /home/
lost+found  xiaoshui
3. Add the entry to /etc/fstab with the mount options usrquota,grpquota. Because the filesystem was just mounted without those options, unmount it and remount.
[root@localhost ~]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Jul 25 09:34:22 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg0-root     /         ext4    defaults            1 1
UUID=7bbc50de-dfee-4e22-8cb6-04d8520b6422 /boot ext4 defaults  1 2
/dev/mapper/vg0-usr      /usr      ext4    defaults            1 2
/dev/mapper/vg0-var      /var      ext4    defaults            1 2
/dev/mapper/vg0-swap     swap      swap    defaults            0 0
tmpfs                    /dev/shm  tmpfs   defaults            0 0
devpts                   /dev/pts  devpts  gid=5,mode=620      0 0
sysfs                    /sys      sysfs   defaults            0 0
proc                     /proc     proc    defaults            1 0
/dev/sdb1                /home     ext4    usrquota,grpquota   0 0
[root@localhost ~]# umount /home/
[root@localhost ~]# mount -a
4. Create the quota database
[root@localhost ~]# quotacheck -cug /home
5. Enable quotas
[root@localhost ~]# quotaon -p /home   # check whether quotas are enabled
group quota on /home (/dev/sdb1) is off
user quota on /home (/dev/sdb1) is off
[root@localhost ~]# quotaon /home      # enable quotas
6. Configure the quota entries
[root@localhost ~]# edquota xiaoshui   # soft is the soft limit, hard the hard limit
Disk quotas for user xiaoshui (uid 500):
  Filesystem    blocks    soft    hard  inodes  soft  hard
  /dev/sdb1         32  300000  500000       8     0     0
7. Switch to user xiaoshui and verify that the settings take effect
[root@localhost ~]# su - xiaoshui    # switch to user xiaoshui
[xiaoshui@localhost ~]$ pwd
/home/xiaoshui
[xiaoshui@localhost ~]$ dd if=/dev/zero of=file1 bs=1M count=290   # first create a 290MB file
290+0 records in
290+0 records out
304087040 bytes (304 MB) copied, 1.08561 s, 280 MB/s               # created successfully, no errors
[xiaoshui@localhost ~]$ dd if=/dev/zero of=file1 bs=1M count=300   # overwrite file1 with a 300MB file
sdb1: warning, user block quota exceeded.                          # a warning appears!
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 0.951662 s, 331 MB/s
[xiaoshui@localhost ~]$ ll -h                                      # the file was still created
total 300M
-rw-rw-r-- 1 xiaoshui xiaoshui 300M Aug 26 08:16 file1
[xiaoshui@localhost ~]$ dd if=/dev/zero of=file1 bs=1M count=500   # overwrite file1 again with a 500MB file
sdb1: warning, user block quota exceeded.
sdb1: write failed, user block limit reached.                      # warning: the user's block hard limit was reached
dd: writing `file1': Disk quota exceeded
489+0 records in
488+0 records out
511967232 bytes (512 MB) copied, 2.344 s, 218 MB/s
[xiaoshui@localhost ~]$ ll -h     # no 500MB file: the system stopped dd automatically
total 489M
-rw-rw-r-- 1 xiaoshui xiaoshui 489M Aug 26 08:19 file1
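The arithmetic behind the cutoff: the hard limit of 500000 1K blocks is just short of 489 MiB, which matches where dd stopped. A quick check, assuming 1 KiB quota blocks:

```shell
# Why dd stopped short of 500 MB: the hard limit is 500000 blocks of 1 KiB each.
hard_blocks=500000
mib=$(( hard_blocks / 1024 ))    # integer division
echo "hard limit = ${mib} MiB"   # 488 full MiB, so the write fails inside the 489th
```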
8. Report quota status
[root@localhost ~]# quota xiaoshui   # blocks shows the user's current block count, already past the soft limit
Disk quotas for user xiaoshui (uid 500):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/sdb1 500004*  300000  500000   6days      10       0       0
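quota flags any field that has exceeded its soft limit with a trailing `*`, as in `500004*` above. A small sketch that picks that marker out of a report line (the sample string is hard-coded here for illustration; in practice you would parse captured quota output):

```shell
# Detect the over-quota '*' marker in a quota report line.
report='/dev/sdb1 500004* 300000 500000 6days 10 0 0'
blocks=$(echo "$report" | awk '{print $2}')   # second field: current block usage
case "$blocks" in
  *\*) status="over soft limit: ${blocks%\*} blocks used" ;;
  *)   status="within quota" ;;
esac
echo "$status"
```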
II. RAID

1. What is RAID

RAID: Redundant Arrays of Inexpensive (Independent) Disks. Multiple disks are combined into one "array" to provide better performance, redundancy, or both.

Advantages:
Improved I/O capability: disks are read and written in parallel
Improved durability: achieved through disk redundancy

2. RAID levels
RAID-0: striped volume (strip)
Read and write performance improved
Usable space: N*min(S1,S2,...)
No fault tolerance
Minimum disks: 2, 2+

RAID-1: mirrored volume (mirror)
Read performance improved; write performance slightly reduced
Usable space: 1*min(S1,S2,...)
Redundancy provided
Minimum disks: 2, 2N

RAID-2
..

RAID-5
Read and write performance improved
Usable space: (N-1)*min(S1,S2,...)
Fault tolerant: parity allows at most 1 failed disk
Minimum disks: 3, 3+

RAID-6
Read and write performance improved
Usable space: (N-2)*min(S1,S2,...)
Fault tolerant: allows at most 2 failed disks
Minimum disks: 4, 4+

RAID-10
Read and write performance improved
Usable space: N*min(S1,S2,...)/2
Fault tolerant: at most one disk per mirror pair may fail
Minimum disks: 4, 4+

RAID-01
RAID-0 first, then RAID-1 on top
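The usable-space formulas above can be checked with a little arithmetic. A sketch assuming N identical disks of size S, with sizes in GB (the function name is illustrative):

```shell
# Usable capacity per RAID level for N identical disks of size S (GB),
# following the formulas listed above.
raid_usable() {   # args: level N S
    local level=$1 n=$2 s=$3
    case $level in
        0)  echo $(( n * s )) ;;         # striping: all space usable
        1)  echo "$s" ;;                 # mirroring: one disk's worth
        5)  echo $(( (n - 1) * s )) ;;   # one disk's worth of parity
        6)  echo $(( (n - 2) * s )) ;;   # two disks' worth of parity
        10) echo $(( n * s / 2 )) ;;     # mirrored pairs, then striped
    esac
}

raid_usable 5 4 100    # four 100GB disks in RAID-5 -> 300
raid_usable 10 4 100   # the same disks in RAID-10 -> 200
```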
Software RAID 5

1. Prepare the disk partitions
[root@localhost ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   80G  0 disk
├─sda1   8:1    0  488M  0 part /boot
├─sda2   8:2    0   40G  0 part /
├─sda3   8:3    0   20G  0 part /usr
├─sda4   8:4    0  512B  0 part
├─sda5   8:5    0    2G  0 part [SWAP]
└─sda6   8:6    0    1M  0 part
sdb      8:16   0   80G  0 disk
└─sdb1   8:17   0    1G  0 part
sdc      8:32   0   20G  0 disk
sdd      8:48   0   20G  0 disk
├─sdd1   8:49   0  100M  0 part
├─sdd2   8:50   0  100M  0 part
├─sdd3   8:51   0  100M  0 part
├─sdd4   8:52   0    1K  0 part
└─sdd5   8:53   0   99M  0 part
sde      8:64   0   20G  0 disk
└─sde1   8:65   0  100M  0 part
sdf      8:80   0   20G  0 disk
sr0     11:0    1  7.2G  0 rom
Prepare four disk partitions with the partition type set to the RAID type (fd); here they are /dev/sdd{1,2,3} and /dev/sde1.

2. Create the RAID 5 array
[root@localhost ~]# mdadm -C /dev/md0 -a yes -l 5 -n 3 -x1 /dev/sd{d{1,2,3},e1}
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Mon Aug 29 19:43:21 2016
mdadm: /dev/sdd2 appears to be part of a raid array:
    level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdd2 but will be lost or
       meaningless after creating array
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Sep  1 14:39:53 2016
     Raid Level : raid5
     Array Size : 202752 (198.03 MiB 207.62 MB)
  Used Dev Size : 101376 (99.02 MiB 103.81 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Sep  1 14:39:54 2016
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : f4eaf910:514ae7ab:6dd6d28f:b6cfcc10
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       50        1      active sync   /dev/sdd2
       4       8       51        2      active sync   /dev/sdd3

       3       8       65        -      spare   /dev/sde1
[root@localhost ~]#
3. Create the filesystem
[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
50800 inodes, 202752 blocks
10137 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33816576
25 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
4. Generate the configuration file, start the RAID 5 array, and mount it
[root@localhost ~]# mdadm -Ds /dev/md0 > /etc/mdadm.conf
[root@localhost ~]# mdadm -A /dev/md0
mdadm: /dev/md0 has been started with 3 drives and 1 spare.
[root@localhost ~]# mount /dev/md0 /testdir/
5. Testing
[root@localhost ~]# cd /testdir/
[root@localhost testdir]# ls
lost+found
[root@localhost testdir]# cp /etc/* .          # copy some files into the mounted directory
[root@localhost testdir]# mdadm -D /dev/md0    # check the RAID status
/dev/md0:
        Version : 1.2
  Creation Time : Thu Sep  1 14:39:53 2016
     Raid Level : raid5
     Array Size : 202752 (198.03 MiB 207.62 MB)
  Used Dev Size : 101376 (99.02 MiB 103.81 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Sep  1 14:47:58 2016
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : f4eaf910:514ae7ab:6dd6d28f:b6cfcc10
         Events : 18

    Number   Major   Minor   RaidDevice State      # the three partitions are running normally
       0       8       49        0      active sync   /dev/sdd1
       1       8       50        1      active sync   /dev/sdd2
       4       8       51        2      active sync   /dev/sdd3

       3       8       65        -      spare   /dev/sde1
[root@localhost testdir]# mdadm /dev/md0 -f /dev/sdd1   # simulate a failure of /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0
[root@localhost testdir]# mdadm -D /dev/md0             # check the status again
/dev/md0:
        Version : 1.2
  Creation Time : Thu Sep  1 14:39:53 2016
     Raid Level : raid5
     Array Size : 202752 (198.03 MiB 207.62 MB)
  Used Dev Size : 101376 (99.02 MiB 103.81 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Sep  1 14:51:01 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : f4eaf910:514ae7ab:6dd6d28f:b6cfcc10
         Events : 37

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       50        1      active sync   /dev/sdd2
       4       8       51        2      active sync   /dev/sdd3

       0       8       49        -      faulty   /dev/sdd1   # /dev/sdd1 is marked faulty, and the spare /dev/sde1 has automatically taken its place
[root@localhost testdir]# cat fstab    # files can still be read normally
#
# /etc/fstab
# Created by anaconda on Mon Jul 25 12:06:44 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=136f7cbb-d8f6-439b-aa73-3958bd33b05f /     xfs  defaults 0 0
UUID=bf3d4b2f-4629-4fd7-8d70-a21302111564 /boot xfs  defaults 0 0
UUID=cbf33183-93bf-4b4f-81c0-ea6ae91cd4f6 /usr  xfs  defaults 0 0
UUID=5e11b173-f7e2-4994-95b9-55cc4c41f20b swap  swap defaults 0 0
[root@localhost testdir]# mdadm /dev/md0 -r /dev/sdd1   # simulate pulling the disk out
[root@localhost testdir]# mdadm -D /dev/md0
..............................omitted........................
    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1   # /dev/sdd1 is no longer listed
       1       8       50        1      active sync   /dev/sdd2
       4       8       51        2      active sync   /dev/sdd3
[root@localhost testdir]#
[root@localhost testdir]# mdadm /dev/md0 -a /dev/sdd1   # simulate adding a disk
mdadm: added /dev/sdd1
[root@localhost testdir]# mdadm -D /dev/md0             # check again
..............................omitted........................
    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       50        1      active sync   /dev/sdd2
       4       8       51        2      active sync   /dev/sdd3
       5       8       49        -      spare   /dev/sdd1   # /dev/sdd1 added successfully, as a spare
[root@localhost testdir]# mdadm /dev/md0 -f /dev/sdd2   # RAID 5 stores parity, so it tolerates one failed disk: the missing data can be recomputed from the parity
mdadm: set /dev/sdd2 faulty in /dev/md0
[root@localhost testdir]# mdadm -D /dev/md0
..............................omitted........................
    Number   Major   Minor   RaidDevice State
       5       8       49        0      active sync   /dev/sdd1
       2       0        0        2      removed
       4       8       51        2      active sync   /dev/sdd3

       1       8       50        -      faulty   /dev/sdd2
       3       8       65        -      faulty   /dev/sde1   # the data files are still readable in this state, but performance degrades, so repair it immediately
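Rather than reading the whole `mdadm -D` report each time, the device-state lines can be counted to spot degradation quickly. A sketch over a captured device table (the sample lines mirror the healthy state above and are hard-coded for illustration; in practice you would parse `mdadm -D /dev/md0` output):

```shell
# Count active and spare members from an mdadm -D device table.
sample='0 8 49 0 active sync /dev/sdd1
1 8 50 1 active sync /dev/sdd2
4 8 51 2 active sync /dev/sdd3
3 8 65 - spare /dev/sde1'
active=$(echo "$sample" | grep -c ' active ')
spare=$(echo "$sample" | grep -c ' spare ')
echo "active=$active spare=$spare"   # a healthy 3-member RAID 5 with one spare
```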