Installing ZFS on CentOS 7 and Creating a RaidZ2 Array


Preface

I have reinstalled this machine several times while setting up NoKVM, and each time the RaidZ array has to be configured beforehand. Since I never took notes, I had to look up the tutorials again every time, so this time I am writing it down.
Reference: https://github.com/zfsonlinux/zfs/wiki/RHEL-and-CentOS

Installing ZFS

The version here is CentOS 7.6; pay attention to your own exact release.
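If you are not sure which release you are running, it can be checked quickly first (both are standard CentOS 7 commands):

cat /etc/redhat-release
uname -r

Then add the EPEL and ZFS on Linux repositories and update: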



yum install epel-release -y && yum localinstall --nogpgcheck http://download.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm -y && yum update -y

Edit /etc/yum.repos.d/zfs.repo as shown below, switching from the dkms repo to the kmod repo, i.e. prebuilt modules for the stock distribution kernel:



 [zfs]

 name=ZFS on Linux for EL 7 - dkms

 baseurl=http://download.zfsonlinux.org/epel/7/$basearch/

-enabled=1

+enabled=0

 metadata_expire=7d

 gpgcheck=1

 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

@@ -9,7 +9,7 @@

 [zfs-kmod]

 name=ZFS on Linux for EL 7 - kmod

 baseurl=http://download.zfsonlinux.org/epel/7/kmod/$basearch/

-enabled=0

+enabled=1

 metadata_expire=7d

 gpgcheck=1

 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
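If you would rather not edit the file by hand, the same two flags can be flipped with yum-config-manager; this is just an equivalent shortcut, assuming the yum-utils package is installed and that the repo ids match the section names above:

yum install yum-utils -y
yum-config-manager --disable zfs
yum-config-manager --enable zfs-kmod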

Then install zfs. To be on the safe side, first remove any wrong dependencies that may already have been pulled in:



yum remove zfs zfs-kmod spl spl-kmod libzfs2 libnvpair1 libuutil1 libzpool2 zfs-release -y && yum install http://download.zfsonlinux.org/epel/zfs-release.el7_6.noarch.rpm -y && yum update -y && yum install kernel-devel zfs -y

Reboot after the installation succeeds, and watch the output for errors; if any appear, check whether the wrong repo was selected. After rebooting, check whether the zfs module is loaded, and load it manually if not:



[root@NoKVM-EU-1 ~]# lsmod | grep zfs

zfs                  3564425  3 

zunicode              331170  1 zfs

zavl                   15236  1 zfs

icp                   270148  1 zfs

zcommon                73440  1 zfs

znvpair                89131  2 zfs,zcommon

spl                   102412  4 icp,zfs,zcommon,znvpair

# If the output above is missing, load the module manually

modprobe zfs
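To avoid having to run modprobe again after every boot, the module can also be listed in modules-load.d and the ZFS helper units enabled. This is a sketch: the unit names below are the ones shipped by the ZFS on Linux packages, so enable whichever of them exist on your system.

echo zfs > /etc/modules-load.d/zfs.conf
systemctl enable zfs-import-cache zfs-mount zfs.target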

Setting the ZFS memory limit

By default ZFS will use up to half of the RAM for its ARC cache. On a large machine that is more than needed, so capping the ARC size is important.



vi /etc/modprobe.d/zfs.conf

# Write the following:

# Min 5GB / Max 16GB Limit

options zfs zfs_arc_min=5368709120

options zfs zfs_arc_max=17179869184
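The numbers are simply GiB expressed in bytes (5 × 1024³ = 5368709120, 16 × 1024³ = 17179869184). If the module is already loaded and you do not want to wait for a reboot, the same limits can usually also be applied at runtime through the module parameters under /sys; treat this as optional, since the reboot below still makes them permanent:

echo 5368709120 > /sys/module/zfs/parameters/zfs_arc_min
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max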

Then reboot for the change to take effect, and verify:



[root@NoKVM-EU-1 ~]# cat /proc/spl/kstat/zfs/arcstats |grep c_

c_min                           4    5368709120

c_max                           4    17179869184

arc_no_grow                     4    0

arc_tempreserve                 4    0

arc_loaned_bytes                4    0

arc_prune                       4    0

arc_meta_used                   4    0

arc_meta_limit                  4    12884901888

arc_dnode_limit                 4    1288490188

arc_meta_max                    4    0

arc_meta_min                    4    16777216

sync_wait_for_async             4    0

arc_need_free                   4    0

arc_sys_free                    4    2109268096

Creating a redundant RaidZ2 zpool

I have 14 disks here and want RaidZ2: usable capacity equal to 12 disks, with up to 2 disks of fault tolerance. Note the raidz2 keyword in the command; without it, zpool creates a plain stripe with no redundancy.



zpool create home raidz2 /dev/sd{b,c,d,e,f,g,h,i,j,k,l,n,m,o}

However, this may fail with an error, because the disks had previously been added to a zpool. In that case the stale partitions need to be removed first, with fdisk:



[root@NoKVM-EU-1 ~]# zpool create home /dev/sd{b,c,d,e,f,g,h,i,j,k,l,n,m,o}

invalid vdev specification

use '-f' to override the following errors:

/dev/sdb1 is part of potentially active pool 'tank'

/dev/sdc1 is part of potentially active pool 'tank'

/dev/sdd1 is part of potentially active pool 'tank'

/dev/sde1 is part of potentially active pool 'tank'

/dev/sdf1 is part of potentially active pool 'tank'

/dev/sdg1 is part of potentially active pool 'tank'

/dev/sdh1 is part of potentially active pool 'tank'

/dev/sdi1 is part of potentially active pool 'tank'

/dev/sdj1 is part of potentially active pool 'tank'

/dev/sdk1 is part of potentially active pool 'tank'

/dev/sdl1 is part of potentially active pool 'tank'

/dev/sdn1 is part of potentially active pool 'tank'

/dev/sdm1 is part of potentially active pool 'tank'

/dev/sdo1 is part of potentially active pool 'tank'

[root@NoKVM-EU-1 ~]# fdisk /dev/sdb

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them. Be careful before using the write command.

Command (m for help): d
Partition number (1,9, default 9):
Partition 9 is deleted

Command (m for help): d
Selected partition 1
Partition 1 is deleted

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
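Repeat this for every disk listed in the error. As an alternative to deleting the partitions by hand, the old signatures can usually be wiped directly; both commands below are destructive, so double-check the device names first:

# wipe all filesystem/RAID/partition-table signatures on the disk
wipefs -a /dev/sdb

# or clear only the old ZFS label on the stale partition
zpool labelclear -f /dev/sdb1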

Then check the available disk devices and make sure no leftover partition entries remain; if none do, the disks can be added to the zpool again:



[root@NoKVM-EU-1 ~]# ls -l /dev/sd*

brw-rw---- 1 root disk 8,   0 Jan 13 10:56 /dev/sda

brw-rw---- 1 root disk 8,   1 Jan 13 10:56 /dev/sda1

brw-rw---- 1 root disk 8,   2 Jan 13 10:56 /dev/sda2

brw-rw---- 1 root disk 8,   3 Jan 13 10:56 /dev/sda3

brw-rw---- 1 root disk 8,   4 Jan 13 10:56 /dev/sda4

brw-rw---- 1 root disk 8,   5 Jan 13 10:56 /dev/sda5

brw-rw---- 1 root disk 8,  16 Jan 13 11:02 /dev/sdb

brw-rw---- 1 root disk 8,  32 Jan 13 11:02 /dev/sdc

brw-rw---- 1 root disk 8,  48 Jan 13 11:02 /dev/sdd

brw-rw---- 1 root disk 8,  64 Jan 13 11:03 /dev/sde

brw-rw---- 1 root disk 8,  80 Jan 13 11:03 /dev/sdf

brw-rw---- 1 root disk 8,  96 Jan 13 11:03 /dev/sdg

brw-rw---- 1 root disk 8, 112 Jan 13 11:03 /dev/sdh

brw-rw---- 1 root disk 8, 128 Jan 13 11:03 /dev/sdi

brw-rw---- 1 root disk 8, 144 Jan 13 11:03 /dev/sdj

brw-rw---- 1 root disk 8, 160 Jan 13 11:03 /dev/sdk

brw-rw---- 1 root disk 8, 176 Jan 13 11:03 /dev/sdl

brw-rw---- 1 root disk 8, 192 Jan 13 11:03 /dev/sdm

brw-rw---- 1 root disk 8, 208 Jan 13 11:03 /dev/sdn

brw-rw---- 1 root disk 8, 224 Jan 13 11:04 /dev/sdo

Check the result with zpool status:



[root@NoKVM-EU-1 ~]# zpool status

  pool: home

 state: ONLINE

  scan: none requested

config:

NAME        STATE     READ WRITE CKSUM

home        ONLINE       0     0     0

  sdb       ONLINE       0     0     0

  sdc       ONLINE       0     0     0

  sdd       ONLINE       0     0     0

  sde       ONLINE       0     0     0

  sdf       ONLINE       0     0     0

  sdg       ONLINE       0     0     0

  sdh       ONLINE       0     0     0

  sdi       ONLINE       0     0     0

  sdj       ONLINE       0     0     0

  sdk       ONLINE       0     0     0

  sdl       ONLINE       0     0     0

  sdn       ONLINE       0     0     0

  sdm       ONLINE       0     0     0

  sdo       ONLINE       0     0     0

errors: No known data errors

[root@NoKVM-EU-1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         63G     0   63G   0% /dev
tmpfs            63G     0   63G   0% /dev/shm
tmpfs            63G  900K   63G   1% /run
tmpfs            63G     0   63G   0% /sys/fs/cgroup
/dev/sda4       5.3T  1.5G  5.1T   1% /
/dev/sda2       488M  219M  244M  48% /boot
/dev/sda3        64G   33M   64G   1% /tmp
tmpfs            13G     0   13G   0% /run/user/0
home             74T     0   74T   0% /home
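The pool is mounted at /home and can be used like any other ZFS pool from here. For example, a child dataset with compression can be created and the pool inspected; the dataset name is just an illustration:

zfs create home/data
zfs set compression=lz4 home/data
zfs list
zpool list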


This article is under CC BY-NC-SA 4.0 license.
Please quote the original link: https://www.liujason.com/article/474.html