Linux – High I/O on DRBD disk drbd10 at the stacked site

Overview

We have four Red Hat boxes (Dell PowerEdge R630), call them a, b, c, and d, running the following OS and packages:

RedHat EL 6.5, MySQL Enterprise 5.6, DRBD 8.4, Corosync 1.4.7

We have set up a 4-way stacked DRBD resource as follows:

Cluster-1: servers a and b, connected to each other over the local LAN. Cluster-2: servers c and d.

Cluster-1 and Cluster-2 are part of different data centers and are linked through the stacked DRBD resource over a virtual IP.

A 1 GB drbd0 device was created locally on each server and is in turn attached to drbd10.
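
(For reference, a stacked resource of this shape is usually brought up in two steps: the lower resource on both nodes of a cluster first, then the stacked resource on whichever node is primary. A minimal sketch, assuming the resource names from the configs below:)

# Bring up the lower-level resource on both nodes of a cluster:
drbdadm up clusterdb
# Promote it on the node that will carry the stacked device:
drbdadm primary clusterdb
# Bring up and promote the stacked resource on top of it (--stacked / -S):
drbdadm --stacked up clusterdb_stacked
drbdadm --stacked primary clusterdb_stacked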

The overall setup consists of four tiers: Tomcat front-end application -> RabbitMQ -> Memcached -> MySQL/DRBD.

We are seeing very high disk I/O even though there is hardly any activity. Traffic will ramp up within a few weeks, so we are worried it will hurt performance badly. I/O utilization is high (sometimes 90% and above) only on the stacked primary node; the secondary node does not have this problem. Utilization stays high even while the application is idle.

Please share any suggestions or tuning guidance that could help us resolve this. Our configuration:

resource clusterdb {
  protocol C;
  handlers {
    pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
  }
  startup {
    degr-wfc-timeout 120;    # 2 minutes.
    outdated-wfc-timeout 2;  # 2 seconds.
  }
  disk {
    on-io-error detach;
    no-disk-barrier;
    no-md-flushes;
  }
  net {
    cram-hmac-alg "sha1";
    shared-secret "clusterdb";
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  on server-1 {
    device /dev/drbd0;
    disk /dev/sda2;
    address 10.170.26.28:7788;
    meta-disk internal;
  }
  on server-2 {
    device /dev/drbd0;
    disk /dev/sda2;
    address 10.170.26.27:7788;
    meta-disk internal;
  }
}

Stacked configuration:

resource clusterdb_stacked {
  protocol A;
  handlers {
    pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
  }
  startup {
    degr-wfc-timeout 120;    # 2 minutes.
    outdated-wfc-timeout 2;  # 2 seconds.
  }
  disk {
    on-io-error detach;
    no-disk-barrier;
    no-md-flushes;
  }
  net {
    cram-hmac-alg "sha1";
    shared-secret "clusterdb";
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  stacked-on-top-of clusterdb {
    device  /dev/drbd10;
    address 10.170.26.28:7788;
  }
  stacked-on-top-of clusterdb_DR {
    device  /dev/drbd10;
    address 10.170.26.60:7788;
  }
}

Data requested in the comments:

Time      svctm  w_await  %util
10:32:01 3.07 55.23 94.11
10:33:01 3.29 50.75 96.27
10:34:01 2.82 41.44 96.15
10:35:01 3.01 72.30 96.86
10:36:01 4.52 40.41 94.24
10:37:01 3.80 50.42 83.86
10:38:01 3.03 72.54 97.17
10:39:01 4.96 37.08 89.45
10:41:01 3.55 66.48 70.19
10:45:01 2.91 53.70 89.57
10:46:01 2.98 49.49 94.73
10:55:01 3.01 48.38 93.70
10:56:01 2.98 43.47 97.26
11:05:01 2.80 61.84 86.93
11:06:01 2.67 43.35 96.89
11:07:01 2.68 37.67 95.41
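
(For anyone retracing these numbers: per-device service time, wait time, and utilization in this form can be sampled with the sysstat tools; an illustrative invocation, not taken from the original post:)

# Extended device statistics every 60 seconds, 15 samples:
iostat -xd 60 15
# Or pull per-device history already collected by sar:
sar -d -p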

Update to the question, based on the comments:

This is really a comparison of the local link versus the stacked (inter-site) link.

Between the local servers:

[root@pri-site-valsql-a]#ping pri-site-valsql-b
PING pri-site-valsql-b.csn.infra.sm (10.170.24.23) 56(84) bytes of data.
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=1 ttl=64 time=0.143 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=2 ttl=64 time=0.145 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=3 ttl=64 time=0.132 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=4 ttl=64 time=0.145 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=5 ttl=64 time=0.150 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=6 ttl=64 time=0.145 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=7 ttl=64 time=0.132 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=8 ttl=64 time=0.127 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=9 ttl=64 time=0.134 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=10 ttl=64 time=0.149 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=11 ttl=64 time=0.147 ms
^C
--- pri-site-valsql-b.csn.infra.sm ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 10323ms
rtt min/avg/max/mdev = 0.127/0.140/0.150/0.016 ms

Between the two stacked servers:

[root@pri-site-valsql-a]#ping dr-site-valsql-b
PING dr-site-valsql-b.csn.infra.sm (10.170.24.48) 56(84) bytes of data.
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=1 ttl=64 time=9.68 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=2 ttl=64 time=4.51 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=3 ttl=64 time=4.53 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=4 ttl=64 time=4.51 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=5 ttl=64 time=4.51 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=6 ttl=64 time=4.52 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=7 ttl=64 time=4.52 ms
^C
--- dr-site-valsql-b.csn.infra.sm ping statistics ---
7 packets transmitted, 7 received, time 6654ms
rtt min/avg/max/mdev = 4.510/5.258/9.686/1.808 ms
[root@pri-site-valsql-a]#
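
(A rough back-of-the-envelope added for context, not part of the original post: a fully synchronous write can complete at most once per round trip, so the two RTTs above imply very different ceilings for a single write stream. The stacked link here runs protocol A rather than protocol C, but the buffering limits discussed in the answer below still tie it to the link:)

# Upper bound for one synchronous write stream: one write per RTT.
awk 'BEGIN {
  printf "local (0.140 ms RTT):     ~%d writes/s\n", 1000 / 0.140;
  printf "inter-site (4.51 ms RTT): ~%d writes/s\n", 1000 / 4.51;
}'
# local (0.140 ms RTT):     ~7142 writes/s
# inter-site (4.51 ms RTT): ~221 writes/s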

iostat output showing the high I/O:

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
drbd0             0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.06    0.00    0.00   99.94

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
drbd0             0.00     0.00    0.00    2.00     0.00    16.00     8.00     0.90    1.50 452.25  90.45

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00    0.13    0.50    0.00   99.12

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
drbd0             0.00     0.00    1.00   44.00     8.00   352.00     8.00     1.07    2.90  18.48  83.15

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.13    0.00    0.06    0.25    0.00   99.56

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
drbd0             0.00     0.00    0.00   31.00     0.00   248.00     8.00     1.01    2.42  27.00  83.70

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.19    0.00    0.06    0.00    0.00   99.75

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
drbd0             0.00     0.00    0.00    2.00     0.00    16.00     8.00     0.32    1.50 162.25  32.45

We edited the options as below, but still no luck:

disk {
  on-io-error detach;
  no-disk-barrier;
  no-disk-flushes;
  no-md-flushes;
  c-plan-ahead 0;
  c-fill-target 24M;
  c-min-rate 80M;
  c-max-rate 300M;
  al-extents 3833;
}

net {
  cram-hmac-alg "sha1";
  shared-secret "clusterdb";
  after-sb-0pri disconnect;
  after-sb-1pri disconnect;
  after-sb-2pri disconnect;
  rr-conflict disconnect;
  max-epoch-size 20000;
  max-buffers 20000;
  unplug-watermark 16;
}

syncer {
  rate 100M;
  on-no-data-accessible io-error;
}
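
(One note for anyone retracing this: edits to drbd.conf do not take effect on their own; they have to be applied to the running resources, e.g. with the resource names assumed from the configs above:)

# Re-apply the on-disk configuration to the running resources:
drbdadm adjust clusterdb
drbdadm --stacked adjust clusterdb_stacked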

Solution
I don't see the stacked resource in your configuration. You also didn't mention any version numbers, but settings this low make me think you are either running something ancient (8.3.x) or following some very old instructions.
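
(For what it's worth, a quick way to confirm the DRBD version actually running:)

# The first line of /proc/drbd reports the loaded kernel module version:
cat /proc/drbd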

In any case, assuming you are using protocol A (asynchronous) for the stacked device's replication: even though writes are buffered, you will still fill the TCP send buffer quickly, and then I/O sits in iowait until the buffer drains; DRBD has to put its replicated writes somewhere, and it can only have so many unacknowledged replicated writes in flight.
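
(DRBD exposes the send-buffer size as a net-section tunable; giving the stacked resource more headroom there delays the point where writers block. A hedged sketch; the right value depends on your bandwidth-delay product, and 0 asks DRBD to auto-tune:)

net {
  sndbuf-size 0;   # 0 = auto-tune; a fixed value such as 4M can also be tried
}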

That iowait feeds into the system load. If you temporarily disconnect the stacked resource, does the load settle down? That is one way to verify this is the problem. You can also inspect the TCP buffers with netstat or ss to see how full they get while load is high.
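
(A minimal sketch of that test, with the resource name and replication port taken from the configs above:)

# Temporarily stop inter-site replication and watch whether iowait drops:
drbdadm --stacked disconnect clusterdb_stacked
iostat -xd 5 /dev/drbd10
# Re-establish replication afterwards:
drbdadm --stacked connect clusterdb_stacked

# While load is high, check how much data is queued unsent (the Send-Q
# column) on the replication port:
ss -tn '( sport = :7788 or dport = :7788 )'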

Unless the latency and throughput of the link between your sites are phenomenal (dark fiber or something), you will probably need/want DRBD Proxy from LINBIT; it lets you buffer writes in system memory.
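
(For illustration only: DRBD Proxy is a commercial LINBIT add-on, and the resource name, host names, addresses, and memlimit below are hypothetical. A proxied resource adds a resource-level proxy section plus per-host inside/outside endpoints, roughly in this shape:)

resource r0 {
  # ... usual options ...
  proxy {
    memlimit 512M;   # RAM the proxy may use to buffer replicated writes
  }
  on alice {
    # ... device/disk/meta-disk as usual ...
    proxy on alice {
      inside  127.0.0.1:7789;     # DRBD talks to its local proxy here
      outside 192.168.23.1:7788;  # the proxies talk to each other here
    }
  }
  # ... matching section for the peer host ...
}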
