• [Tech Notes] KFC Final Summary
1. MariaDB

vi /etc/my.cnf
lower_case_table_names=1        # make table names case-insensitive
innodb_buffer_pool_size=4G      # InnoDB buffer (index/data/insert buffering) of 4G
innodb_log_buffer_size=64M      # log buffer of 64M
innodb_log_file_size=256M       # redo log file size of 256M
innodb_log_files_in_group=2     # redo log file group of 2

vi /etc/sysconfig/memcached
MAXCONN="2048"       # raise the maximum connections to 2048
CACHESIZE="512"      # raise the cache size to 512
hash_algorithm=md5   # switch the hash algorithm to md5

MySQL usage:
mysql -uroot -p000000     # log in to the database
create database test;     # create a database
use test;                 # switch to it
create table company(     # create a table
id int not null primary key,
name varchar(50),
addr varchar(255));
Test the company table:
insert into company values(1,"huawei","china");   # insert a row into company
select * from company;                            # query the rows in company
exit

rabbitmqctl add_user chinaskill chinapd              # create a RabbitMQ user
rabbitmqctl set_user_tags chinaskill administrator   # make chinaskill an administrator
rabbitmqctl list_users                               # list all users

2. Nova

vi /etc/nova/nova.conf
cpu_allocation_ratio=4.0        # CPU overcommit ratio
ram_allocation_ratio=1.5        # memory overcommit ratio
reserved_host_memory_mb=2048    # reserved host memory
reserved_host_disk_mb=10240     # reserved host disk
service_down_time=120           # nova service heartbeat check interval of 120 seconds
remove_unused_base_images=true  # image caches that have not been used for a while are removed automatically after a set time

3. Glance

(1) Share a Glance image with a project
openstack image create --disk-format qcow2 --progress --file ch<Tab> <custom image name>
openstack project list
openstack image list
openstack image add project <image id> <project id>
openstack image member list <image name or id>

(2) Set the shared image membership status to accepted
openstack image set shared <image id>

(3) Compress a Glance image
qemu-img convert -c -O qcow2 /path/to/image /path/to/compressed-image.qcow2

4. Keystone

openstack domain create 210Demo                        # create a domain
openstack group create devps --domain 210Demo          # create a group in the domain
openstack project create Production --domain 210Demo   # create a project in the domain
openstack user create --domain 210Demo --project Production xiaoming --email com   # create a user in the project
openstack role add --user xiaoming --project Production member   # make the user a member of the project
openstack role add --user xiaoming --project Production admin    # make the user an admin of the project

Keystone access control (restrict regular users so they cannot create or delete images):
vi /etc/glance/policy.json
change
"add_image": "",
"delete_image": ""
to
"add_image": "role:admin",
"delete_image": "role:admin"

5. Redis: one master, two slaves, three sentinels

# edit the redis1 config
[root@redis1 ~]# vi /etc/redis.conf
bind 127.0.0.1     change to  bind 0.0.0.0   # listen on all addresses
protected-mode no    # change to no
daemonize yes        # change to yes
below "# requirepass foobared" add:             requirepass "123456"
below "# masterauth <master-password>" add:     masterauth "123456"
[root@redis1 ~]# systemctl restart redis

# edit redis2 and redis3
[root@redis2 ~]# vi /etc/redis.conf
bind 127.0.0.1     change to  bind 0.0.0.0   # listen on all addresses
protected-mode no    # change to no
daemonize yes        # change to yes
below "# requirepass foobared" add:             requirepass "123456"
below "# masterauth <master-password>" add:     masterauth "123456"
below "# slaveof <masterip> <masterport>" add:  slaveof 192.168.100.115 6379
[root@redis2 ~]# systemctl restart redis
redis3 is configured the same way.

[root@redis1 ~]# systemctl restart redis
[root@redis1 ~]# redis-cli -a 123456 info replication
connected_slaves:2

[root@redis1 ~]# vi /etc/sentinel.conf
port 26379
protected-mode no    # protected mode
daemonize yes
sentinel monitor redis1 <master ip> 6379 2
sentinel auth-pass redis1 123456

[root@redis2 ~]# vi /etc/sentinel.conf
port 26380
protected-mode no
daemonize yes
sentinel monitor redis1 <master ip> 6379 2
sentinel auth-pass redis1 123456

[root@redis3 ~]# vi /etc/sentinel.conf
port 26381
protected-mode no
daemonize yes
sentinel monitor redis1 <master ip> 6379 2
sentinel auth-pass redis1 123456

# start sentinel mode
[root@redis1 ~]# redis-server /etc/sentinel.conf --sentinel
[root@redis2 ~]# redis-server /etc/sentinel.conf --sentinel
[root@redis3 ~]# redis-server /etc/sentinel.conf --sentinel

6. Linux

Linux tuning - dirty page writeback:
sysctl -a | grep 3000
vi /etc/sysctl.conf
vm.dirty_expire_centisecs = 6000
[root@controller ~]# sysctl -p
net.ipv4.icmp_echo_ignore_all = 0
vm.dirty_expire_centisecs = 6000
If the entry is not present in /etc/sysctl.conf, append it with echo:
echo "vm.dirty_expire_centisecs=6000" >> /etc/sysctl.conf

Linux tuning - swap usage limit:
sysctl -a | grep swappiness
echo "vm.swappiness = 20" >> /etc/sysctl.conf
sysctl -p

Linux tuning - SYN flood protection:
sysctl -a | grep syn
net.ipv4.tcp_syncookies = 1
echo "net.ipv4.tcp_syncookies = 1" >> /etc/sysctl.conf
sysctl -p

Linux network namespace operations:
First create test1 and test2
ip netns add test1
ip netns add test2
Bring up loopback in test1 and test2
ip netns exec test1 ip link set dev lo up
ip netns exec test2 ip link set dev lo up
Create a veth pair to connect them
ip link add veth-test1 type veth peer name veth-test2
Assign the veth ends to test1 and test2
ip link set veth-test1 netns test1
ip link set veth-test2 netns test2
Assign IP addresses to the veth ends
ip netns exec test1 ip addr add <ip1>/24 dev veth-test1
ip netns exec test2 ip addr add <ip2>/24 dev veth-test2
Bring up the veth ends
ip netns exec test1 ip link set dev veth-test1 up
ip netns exec test2 ip link set dev veth-test2 up
Ping each other
ip netns exec test1 ping <ip2>
ip netns exec test2 ping <ip1>
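The namespace walkthrough above is pure shell; as a convenience, here is a minimal Python sketch (a hypothetical helper, assuming root privileges and the iproute2 `ip` command shown above) that replays the same sequence via subprocess. The two addresses are placeholders, just like <ip1>/<ip2> in the notes.

```python
import subprocess

def ip(*args):
    """Run an ip(8) command and fail loudly if it errors."""
    subprocess.run(["ip", *args], check=True)

# create and bring up the two namespaces (same steps as above)
for ns in ("test1", "test2"):
    ip("netns", "add", ns)
    ip("netns", "exec", ns, "ip", "link", "set", "dev", "lo", "up")

# veth pair, one end per namespace
ip("link", "add", "veth-test1", "type", "veth", "peer", "name", "veth-test2")
ip("link", "set", "veth-test1", "netns", "test1")
ip("link", "set", "veth-test2", "netns", "test2")

# addresses are placeholders, as in the notes above
ip("netns", "exec", "test1", "ip", "addr", "add", "192.168.1.1/24", "dev", "veth-test1")
ip("netns", "exec", "test2", "ip", "addr", "add", "192.168.1.2/24", "dev", "veth-test2")
ip("netns", "exec", "test1", "ip", "link", "set", "dev", "veth-test1", "up")
ip("netns", "exec", "test2", "ip", "link", "set", "dev", "veth-test2", "up")

# cross-namespace connectivity check
ip("netns", "exec", "test1", "ping", "-c", "3", "192.168.1.2")
```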
• [Tech Notes] KFC Development
## IV. OpenStack Operations Development Tasks

### 1. OpenStack Python operations scripting: create a user via the RESTful API (3 points)

On the provided OpenStack private cloud, use the Train "openstack-python-dev" image to create one instance with a 4 vCPU / 12G RAM / 100G disk flavor. The required development environment is already installed on that host; the default login is "root/Abc@1234". Using the Python requests library and the OpenStack RESTful APIs, create api_manager_identity.py under /root and write Python code that: (1) queries the users first and, if a user named "user_demo" already exists, deletes it; (2) if "user_demo" does not exist, creates it with the password "1DY@2022"; (3) after creation, queries that user, prints the response body to the console, and also writes it as JSON (indent=4) to user_demo.json in the current directory. When finished, submit the allinone node's username, password and IP address in the answer box.

```python
[root@python ~]# source /etc/keystone/admin-openrc.sh
[root@python ~]# openstack token issue

import requests, json

url = 'http://controller:5000/v3/auth/tokens'
# paste the token printed by `openstack token issue`
headers = {'Content-Type': 'application/json', 'X-Auth-Token': ''}
rsp = requests.get('http://controller:5000/v3/users', headers=headers)
for i in rsp.json()['users']:
    if 'user_demo' == i['name']:
        requests.delete('http://controller:5000/v3/users/{}'.format(i['id']), headers=headers)
data = {'user': {'name': 'user_demo', 'password': '1DY@2022'}}
rsp = requests.post('http://controller:5000/v3/users', headers=headers, data=json.dumps(data))
print(rsp.json())
with open('user_demo.json', 'w') as outfile:
    json.dump(rsp.json(), outfile, indent=4)

[root@python ~]# python api_manager_identity.py
```

### 2. Create a flavor through the API with Python (2 points)

On a self-built OpenStack private cloud or the provided all-in-one platform, create create_flavor.py under /root on the controller node and write Python code against the OpenStack API that creates a flavor named pvm_flavor with 1 vCPU, 1024M RAM, a 20G disk and ID 9999 (if a flavor with the same name exists, the code must delete it first). When the code finishes it must print "云主机类型创建成功". When done, submit the controller node's IP address, username and password. (The marking system connects to your controller node and runs the script, so have the runtime environment ready.)

```python
[root@python ~]# openstack token issue
[root@python ~]# vi create_flavor.py

import requests, json

url = 'http://controller:5000/v3/auth/tokens'
headers = {'Content-Type': 'application/json', 'X-Auth-Token': 'gAAAAABj2jXucfyBBIRkXQsQHuU1KDbS1bIQaugdyJy3liE87SvU8uVudo0Qc_99zprcXBKBnAtHTdRa20mQDkVsjaCee3O_o6yWcTbOdYRE7lg8P1kKxW6hhqTWW8XA5Pe3Vd5_jJ5SuubLRSnFewSTs_Tdt1St9wMxiLn0Lmf1ZkLB2GRJFVo'}
rsp = requests.get('http://controller:8774/v2.1/flavors', headers=headers)
for i in rsp.json()['flavors']:
    if 'pvm_flavor' == i['name']:
        requests.delete('http://controller:8774/v2.1/flavors/{}'.format(i['id']), headers=headers)
data = {'flavor': {'name': 'pvm_flavor', 'vcpus': '1', 'ram': '1024', 'disk': '20', 'id': '9999'}}
rsp = requests.post('http://controller:8774/v2.1/flavors', headers=headers, data=json.dumps(data))
print(rsp.json())
with open('flavor.json', 'w') as outfile:
    json.dump(rsp.json(), outfile, indent=4)
print('云主机类型创建成功')

[root@python ~]# python3 create_flavor.py
云主机类型创建成功
```
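Sections 1 and 2 (and several scripts below) leave the X-Auth-Token blank or paste it in by hand from `openstack token issue`. As a sketch of how the token could be fetched programmatically instead (the admin/000000 credentials and the "Default" domain are assumptions, not stated in the original answers), Keystone's POST /v3/auth/tokens returns the token in the X-Subject-Token response header:

```python
import requests, json

def get_token(auth_url='http://controller:5000/v3',
              username='admin', password='000000',
              project_name='admin', domain='Default'):
    """Request a project-scoped token from Keystone and return it for use as X-Auth-Token."""
    body = {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {'user': {'name': username,
                                      'domain': {'name': domain},
                                      'password': password}},
            },
            'scope': {'project': {'name': project_name,
                                  'domain': {'name': domain}}},
        }
    }
    rsp = requests.post(auth_url + '/auth/tokens',
                        headers={'Content-Type': 'application/json'},
                        data=json.dumps(body))
    rsp.raise_for_status()
    return rsp.headers['X-Subject-Token']

if __name__ == '__main__':
    token = get_token()
    headers = {'Content-Type': 'application/json', 'X-Auth-Token': token}
    print(requests.get('http://controller:5000/v3/users', headers=headers).json())
```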
### 3. Create an image through the API with Python (2 points)

On a self-built OpenStack private cloud or the provided all-in-one platform, create create_image.py under /root on the controller node and write Python code against the OpenStack API to upload an image. Upload cirros-0.3.4-x86_64-disk.img as pvm_image with disk_format qcow2 and container_format bare. When the code finishes it must print "镜像创建成功,id为:xxxxxx" (if an image with the same name exists, the code must delete it first). When done, submit the controller node's IP address, username and password. (The marking system connects to your controller node and runs the script, so have the runtime environment ready.)

```python
[root@python ~]# vi create_image.py

import requests, json

url = 'http://controller:5000/v3/auth/tokens'
headers = {'Content-Type': 'application/json', 'X-Auth-Token': 'gAAAAABj2joGX3RTX6Rl0_NALfR2DpIELdR7Ic7JYS_DA9FZ23HZqpy6Hewo9nBGTJxNQnt5a_wnUKMVCYUrpZcQIQ3-RHlYAL8H6fkK0iSwCRkLLcgCLq923XOyn_8QJBVTeY-fVLZXPupWzvM2IQfaNHm0x-MfIO3QWzFeArj6-guaK3Id_eo'}
rsp = requests.get('http://controller:9292/v2/images', headers=headers)
for i in rsp.json()['images']:
    if 'pvm_image' == i['name']:
        requests.delete('http://controller:9292/v2/images/{}'.format(i['id']), headers=headers)
data = {'name': 'pvm_image', 'disk_format': 'qcow2', 'container_format': 'bare'}
rsp = requests.post('http://controller:9292/v2/images', headers=headers, data=json.dumps(data))
image_id = rsp.json()['id']
headers['Content-Type'] = 'application/octet-stream'
print(rsp.json())
with open('image.json', 'w') as outfile:
    json.dump(rsp.json(), outfile, indent=4)
data = open('/root/cirros-0.3.4-x86_64-disk.img', 'rb')
rsp = requests.put('http://controller:9292/v2/images/{}/file'.format(image_id), headers=headers, data=data)
print('镜像创建成功,id为:{}'.format(image_id))
```

### 4. Create a network through the API with Python (2 points)

On a self-built OpenStack private cloud or the provided all-in-one platform, create create_network.py under /root on the controller node and write Python code against the OpenStack API to create a network. Requirements: (1) create the internal network pvm_int with a subnet named pvm_intsubnet; (2) set the subnet to 192.168.x.0/24 (where x is the seat number) with gateway 192.168.x.1 (if an internal network with the same name exists, the code must delete it first). When the code finishes it must print "网络创建成功". When done, submit the controller node's IP address, username and password. (The marking system connects to your controller node and runs the script, so have the runtime environment ready.)

```python
[root@python ~]# vi create_network.py
#!/usr/bin/python3.6
import requests, json

url = 'http://192.168.100.225:5000/v3/auth/tokens'
headers = {'Content-Type': 'application/json', 'X-Auth-Token': ''}
rsp = requests.get('http://192.168.100.225:9696/v2.0/networks', headers=headers)
for i in rsp.json()['networks']:
    if 'pvm_int' == i['name']:
        rsp = requests.delete('http://192.168.100.225:9696/v2.0/networks/{}'.format(i['id']), headers=headers)
data = {'network': {'name': 'pvm_int'}}
rsp = requests.post('http://192.168.100.225:9696/v2.0/networks', headers=headers, data=json.dumps(data))
network_id = rsp.json()['network']['id']
data = {'subnet': {'name': 'pvm_intsubnet', 'network_id': network_id, 'cidr': '192.168.20.0/24', 'ip_version': '4'}}
rsp = requests.post('http://192.168.100.225:9696/v2.0/subnets', headers=headers, data=json.dumps(data))
print('网络创建成功')
```

### 5. Create an instance through the API with Python (2 points)

On a self-built OpenStack private cloud or the provided all-in-one platform, create create_vm.py under /root on the controller node and write Python code against the OpenStack API to create an instance. Use pvm_image, pvm_flavor and pvm_int to create one instance named pvm1 (if an instance with the same name exists, the code must delete it first). When the code finishes it must print "创建云主机成功". When done, submit the controller node's IP address, username and password. (The marking system connects to your controller node and runs the script, so have the runtime environment ready.)

```python
[root@controller ~]# vi create_vm.py
import requests, json

# obtain a token from /auth/tokens first and paste it below
headers = {'Content-Type': 'application/json', 'X-Auth-Token': ''}
rsp = requests.get('http://controller:8774/v2.1/servers', headers=headers)
for i in rsp.json()['servers']:
    if 'pvm1' == i['name']:
        requests.delete('http://controller:8774/v2.1/servers/{}'.format(i['id']), headers=headers)
rsp = requests.get('http://controller:9292/v2/images', headers=headers)
for i in rsp.json()['images']:
    if 'pvm_image' == i['name']:
        image = i['id']
rsp = requests.get('http://controller:8774/v2.1/flavors', headers=headers)
for i in rsp.json()['flavors']:
    if 'pvm_flavor' == i['name']:
        flavor = i['id']
rsp = requests.get('http://controller:9696/v2.0/networks', headers=headers)
for i in rsp.json()['networks']:
    if 'pvm_int' == i['name']:
        network = i['id']
data = {'server': {'name': 'pvm1', 'imageRef': image, 'networks': [{'uuid': network}], 'flavorRef': flavor}}
rsp = requests.post('http://controller:8774/v2/servers', headers=headers, data=json.dumps(data))
print(rsp.json())
with open('vm.json', 'w') as outfile:
    json.dump(rsp.json(), outfile, indent=4)
```
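create_vm.py above returns as soon as the POST is accepted; if you also want to wait until the instance is actually ACTIVE, a small hypothetical polling helper against the same Nova endpoint could look like this (headers and the server name are the ones used in the script above):

```python
import time, requests

def wait_for_server(name, headers, nova='http://controller:8774/v2.1', timeout=300):
    """Poll Nova until the named server reaches ACTIVE (or ERROR / timeout)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        servers = requests.get(nova + '/servers', headers=headers).json()['servers']
        match = [s for s in servers if s['name'] == name]
        if match:
            detail = requests.get(nova + '/servers/{}'.format(match[0]['id']),
                                  headers=headers).json()['server']
            if detail['status'] in ('ACTIVE', 'ERROR'):
                return detail['status']
        time.sleep(5)
    return 'TIMEOUT'

# usage (token obtained as in the earlier scripts):
# headers = {'Content-Type': 'application/json', 'X-Auth-Token': token}
# print(wait_for_server('pvm1', headers))
```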
### 6. Python operations development: a command-line tool for flavor management (2 points)

Using the prepared OpenStack Python development environment, create flavor_manager.py under /root to manage flavors; the program must support command-line arguments. Hint: the Python standard library argparse module provides command-line argument parsing. Requirements:
(1) The program creates one flavor from command-line arguments and returns the response. Positional argument "create" means create; "-n" sets the flavor name (string); "-m" sets the memory size (int, in MB); "-v" sets the number of vCPUs (int); "-d" sets the disk size (int, in GB); "-id" sets the ID (string). Example: python3 flavor_manager.py create -n flavor_small -m 1024 -v 1 -d 10 -id 100000
(2) The program lists all flavors under the admin account. Positional argument "getall" lists all flavors; the result is printed to the console as JSON. Example: python3 flavor_manager.py getall
(3) The program queries a single flavor by name/ID. Positional argument "get" queries one flavor; "-id" selects it by ID (string); the result is printed to the console as JSON. Example: python3 flavor_manager.py get -id 100000
(4) The program deletes the flavor with the given ID. Positional argument "delete" deletes one flavor; "-id" selects it by ID; the response is printed to the console. Example: python3 flavor_manager.py delete -id 100000

```python
#!/usr/bin/python3.6
import argparse, requests, json

headers = {'Content-Type': 'application/json', 'X-Auth-Token': ''}
parser = argparse.ArgumentParser()
parser.add_argument('command', help='Resource command name', type=str)
parser.add_argument('-n', '--name', help='flavor name', type=str)
parser.add_argument('-m', '--memory', help='memory size in MB', type=int)
parser.add_argument('-v', '--vcpu', help='number of vCPUs', type=int)
parser.add_argument('-d', '--disk', help='disk size in GB', type=int)
parser.add_argument('-id', '--id', help='flavor ID', type=str)
args = parser.parse_args()

if args.command:
    if args.command == "create":
        data = {'flavor': {'name': args.name, 'vcpus': args.vcpu, 'ram': args.memory, 'disk': args.disk, 'id': args.id}}
        rsp = requests.post('http://192.168.100.34:8774/v2.1/flavors', headers=headers, data=json.dumps(data))
        print(rsp.json())
    elif args.command == "getall":
        rsp = requests.get('http://192.168.100.34:8774/v2.1/flavors', headers=headers)
        print(rsp.json())
    elif args.command == "get":
        rsp = requests.get('http://192.168.100.34:8774/v2.1/flavors/{}'.format(args.id), headers=headers)
        print(rsp.json())
    elif args.command == "delete":
        rsp = requests.delete('http://192.168.100.34:8774/v2.1/flavors/{}'.format(args.id), headers=headers)
        if rsp.status_code == 202:
            print(rsp.status_code)
```
### 7. Python operations development: a command-line tool for user management (2 points)

Using the prepared OpenStack Python development environment, create user_manager.py under /root to implement user management; the program must support command-line arguments. Hint: the Python standard library argparse module provides command-line argument parsing.
(1) The program creates one user from command-line arguments. Positional argument "create" means create; "-i or --input" takes the user data as JSON text; the result is printed to the console as JSON. Example: python3 user_manager.py create --input '{ "name": "user01", "password": "000000", "description": "description" }'
(2) The program queries a user by name. Positional argument "get" queries one user; "-n or --name" selects the name (string); "-o or --output" writes the user info to a file in JSON format. Example: python3 user_manager.py get --name user01 -o user.json
(3) The program lists all users under the admin account. Positional argument "getall" lists all users; "-o or --output" writes the result to a file in YAML format. Example: python3 user_manager.py getall -o openstack_all_user.yaml
(4) The program deletes a user by name and prints the response to the console. Positional argument "delete" deletes one user; "-n or --name" selects the name (string). Example: python3 user_manager.py delete --name user01

```python
#!/usr/bin/python3.6
import argparse, requests, json, yaml

headers = {'Content-Type': 'application/json', 'X-Auth-Token': ''}
parser = argparse.ArgumentParser()
parser.add_argument('command', help='Resource command name', type=str)
parser.add_argument('-n', '--name', help='user name to look up', type=str)
parser.add_argument('-i', '--input', help='user data as JSON text', type=str)
parser.add_argument('-o', '--output', help='file to write the query result to', type=str)
args = parser.parse_args()

rsp = requests.get('http://192.168.100.34:5000/v3/users', headers=headers)
for i in rsp.json()['users']:
    if args.name == i['name']:
        user_id = i['id']

if args.command:
    if args.command == "create":
        data = {'user': json.loads(args.input)}
        rsp = requests.post('http://192.168.100.34:5000/v3/users', headers=headers, data=json.dumps(data))
        print(rsp.json())
    elif args.command == "getall":
        with open(args.output, 'w') as yamlfile:
            yaml.dump(rsp.json(), yamlfile)
    elif args.command == "get":
        rsp = requests.get('http://192.168.100.34:5000/v3/users/{}'.format(user_id), headers=headers)
        with open(args.output, 'w') as jsonfile:
            json.dump(rsp.json(), jsonfile, indent=4)
    elif args.command == "delete":
        rsp = requests.delete('http://192.168.100.34:5000/v3/users/{}'.format(user_id), headers=headers)
        if rsp.status_code == 204:
            print(rsp.status_code)
```

### 8. OpenStack Python operations scripting: create an image with the SDK (3 points)

On the provided OpenStack private cloud, use the Train openstack-python-dev image to create one instance with a 4 vCPU / 12G RAM / 100G disk flavor. The required development environment is already installed; the default login is "root/Abc@1234". Using the "openstacksdk" Python library, create sdk_manager_image.py under /root and write Python code that: (1) checks whether an image named "cirros-image" exists and deletes it first if so; (2) if "cirros-image" does not exist, creates the image from the "cirros-0.3.4-x86_64-disk.img" file on the file server; (3) after creation, queries the image, prints the body to the console, and also writes it as JSON (indent=4) to image_demo.json in the current directory. When finished, submit the allinone node's username, password and IP address.

```python
[root@controller ~]# vi sdk_manager_image.py
#!/usr/bin/python3.6
import openstack, json

conn = openstack.connect(
    auth_url='http://192.168.100.37:5000/v3',
    user_domain_name='demo',
    username='admin',
    password='000000')
image = conn.image.find_image('cirros-image')
if image:
    conn.image.delete_image(image)
# note: the web-download import method expects a URL that Glance can reach
uri = '/root/cirros-0.3.4-x86_64-disk.img'
data = {'name': 'cirros-image', 'disk_format': 'qcow2', 'container_format': 'bare'}
create = conn.image.create_image(**data)
image = conn.image.import_image(create, method='web-download', uri=uri)
get_image = conn.image.find_image('cirros-image')
print(get_image)
with open('image_demo.json', 'w') as outfile:
    json.dump(get_image.to_dict(), outfile, indent=4, default=str)
```

### 9. OpenStack Python operations scripting: create a flavor with the SDK (3 points)

On the provided OpenStack private cloud, use the Train openstack-python-dev image to create one instance with a 4 vCPU / 12G RAM / 100G disk flavor. The required development environment is already installed; the default login is "root/Abc@1234". Using the "openstacksdk" Python library, create sdk_manager_flavor.py under /root and write Python code that: (1) checks whether a flavor named "flavor_demo" exists and deletes it first if so; (2) if "flavor_demo" does not exist, creates it with 1 vCPU, 1024M RAM, a 20G disk and ID 9999; (3) after creation, queries the flavor, prints the body to the console, and also writes it as JSON (indent=4) to flavor_demo.json in the current directory. When finished, submit the allinone node's username, password and IP address.

```python
#!/usr/bin/python3.6
import openstack, json

conn = openstack.connect(
    auth_url='http://192.168.100.37:5000/v3',
    user_domain_name='demo',
    username='admin',
    password='000000')
nova = conn.compute.find_flavor('flavor_demo')
if nova:
    conn.compute.delete_flavor(nova.id)
data = {'name': 'flavor_demo', 'vcpus': '1', 'ram': '1024', 'disk': '20', 'id': '9999'}
flavor = conn.compute.create_flavor(**data)
print(flavor)
with open('flavor_demo.json', 'w') as outfile:
    json.dump(flavor.to_dict(), outfile, indent=4, default=str)
```
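The SDK scripts above and below dump openstacksdk resource objects to JSON files; because those objects are not directly JSON-serializable, the snippets call to_dict() first. A small hedged helper (hypothetical, not part of the original answers) that works for any of the resources used here:

```python
import json
import openstack

def dump_resource(resource, path):
    """Serialize an openstacksdk resource (flavor, image, network, server...) to a JSON file."""
    # to_dict() flattens the resource; default=str covers any non-JSON-native values
    with open(path, 'w') as outfile:
        json.dump(resource.to_dict(), outfile, indent=4, default=str)

# usage sketch, reusing the connection style from the scripts above
conn = openstack.connect(auth_url='http://192.168.100.37:5000/v3',
                         user_domain_name='demo', username='admin', password='000000')
flavor = conn.compute.find_flavor('flavor_demo')
if flavor:
    dump_resource(flavor, 'flavor_demo.json')
```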
### 10. OpenStack Python operations scripting: create a network with the SDK (3 points)

On the provided OpenStack private cloud, use the Train openstack-python-dev image to create one instance with a 4 vCPU / 12G RAM / 100G disk flavor. The required development environment is already installed; the default login is "root/Abc@1234". Using the "openstacksdk" Python library, create sdk_manager_net.py under /root and write Python code that: (1) checks whether a network named "net_demo" exists and deletes it first if so; (2) if "net_demo" does not exist, creates the internal network net_demo with a subnet named subnet_demo, subnet range 192.168.x.0/24 (where x is the seat number) and gateway 192.168.x.1; (3) after creation, queries the network, prints the body to the console, and also writes it as JSON (indent=4) to net_demo.json in the current directory. When finished, submit the allinone node's username, password and IP address.

```python
#!/usr/bin/python3.6
import openstack, json

conn = openstack.connect(
    auth_url='http://192.168.100.37:5000/v3/',
    user_domain_name='demo',
    username='admin',
    password='000000')
net = conn.network.find_network('net_demo')
if net:
    conn.network.delete_network(net.id)
network = conn.network.create_network(name='net_demo')
data = {'name': 'subnet_demo', 'cidr': '192.168.3.0/24', 'gateway_ip': '192.168.3.1', 'network_id': network.id, 'ip_version': '4'}
subnet = conn.network.create_subnet(**data)
get_network = conn.network.find_network('net_demo')
get_subnet = conn.network.find_subnet('subnet_demo')
print(get_network, get_subnet)
with open('net_demo.json', 'w') as outfile:
    json.dump(get_network.to_dict(), outfile, indent=4, default=str)
```

### 11. OpenStack Python operations scripting: create an instance with the SDK (3 points)

[Hands-on variant] Python operations development: create an instance with the OpenStack Python SDK (1 point). Using the prepared OpenStack Python development environment, create sdk_server_manager.py under /root and use the python-openstacksdk module to create and query an instance. Before creating, check whether an instance with the same name already exists and delete it first if so.
(1) Create one instance with the following details: name server001, image cirros-0.3.4-x86_64-disk.img, flavor m1.tiny; fill in the network and any other required details yourself.
(2) Query the instance: query server001's details and print them to the console as JSON.
When finished, submit the IP address, username and password of the development environment's controller node.

```python
#!/usr/bin/python3.6
import openstack, json

conn = openstack.connect(
    auth_url='http://192.168.100.37:5000/v3',
    user_domain_name='demo',
    username='admin',
    password='000000')
server = conn.compute.find_server('vm_demo')
if server:
    conn.compute.delete_server(server.id)
image = conn.image.find_image('cirros-image')
flavor = conn.compute.find_flavor('flavor_demo')
network = conn.network.find_network('net_demo')
data = {'name': 'vm_demo', 'image_id': image.id, 'flavor_id': flavor.id, 'networks': [{'uuid': network.id}]}
vm = conn.compute.create_server(**data)
print(vm)
with open('vm_demo.json', 'w') as outfile:
    json.dump(vm.to_dict(), outfile, indent=4, default=str)

---------------------------------

import openstack, json

conn = openstack.connect(
    auth_url='http://controller:5000/v3',
    user_domain_name='demo',
    username='admin',
    password='000000')
server = conn.compute.find_server('server002')
if server:
    conn.compute.delete_server(server.id)
image = conn.image.find_image('cirros-1')
flavor = conn.compute.find_flavor('m1.tiny')
network = conn.network.find_network('nettt')
data = {'name': 'server002', 'image_id': image.id, 'flavor_id': flavor.id, 'networks': [{'uuid': network.id}], 'description': 'server001'}
vm = conn.compute.create_server(**data)
print(vm)
# related SDK calls worth remembering:
# conn.network.delete_network
# conn.network.create_subnet
# conn.image.create_image
# conn.image.import_image
```

## V. Ansible Script Development

### 1. Deploy a MariaDB cluster with Ansible (2 points)

Using the OpenStack private cloud, create four CentOS 7.9 instances: one as the Ansible control node named ansible, and three named node1, node2 and node3. On the control node, write an Ansible playbook (create the example directory under /root as the Ansible working directory, with cscc_install.yaml as the entry file) that installs a highly available database cluster (MariaDB_Galera_cluster, database password 123456) on the other three instances (the required packages are on the HTTP server). When finished, submit the Ansible node's username, password and IP address. (The marking system connects to your Ansible node and runs the playbook, so have the Ansible environment ready.)

```yaml
[root@ansible ~]# mkdir example
[root@ansible ~]# vi /etc/hosts
192.168.100.180 ansible
192.168.100.118 node1
192.168.100.49 node2
192.168.100.144 node3
[root@ansible ~]# tar -zxvf ansible.tar.gz -C /opt/
[root@ansible ~]# tar -zxvf gpmall-single.tar.gz -C /opt/
[root@ansible ~]# rm -rf /etc/yum.repos.d/*
[root@ansible ~]# vi /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=http://192.168.100.10/image/iaas/centos7.9/
gpgcheck=0
enabled=1
[ansible]
name=ansible
baseurl=file:///opt/ansible
gpgcheck=0
enabled=1
[gpmall]
name=gpmall
baseurl=file:///opt/gpmall-single/gpmall-repo
gpgcheck=0
enabled=1
[root@ansible ~]# yum install ansible mariadb mariadb-server vim -y
[root@ansible ~]# vim /etc/ansible/hosts
[hosts]
node1 ansible_ssh_pass=000000
node2 ansible_ssh_pass=000000
node3 ansible_ssh_pass=000000
[root@ansible ~]# vi /etc/ansible/ansible.cfg
host_key_checking = False
command_warnings = False
[root@ansible ~]# ansible all -m ping
node1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
node2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
node3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
[root@ansible ~]# mkdir example
[root@ansible ~]# cd example/
[root@ansible example]# mkdir roles/mariadb/tasks -p
[root@ansible example]# touch cscc_install.yaml
[root@ansible example]# vim cscc_install.yaml
---
- hosts: hosts
  remote_user: root
  roles:
    - mariadb
[root@ansible example]# vim roles/mariadb/tasks/main.yaml
- name: cp hosts
  copy: src=/root/hosts dest=/etc
- name: cp
  synchronize: src=/root/gpmall-repo dest=/root
- name: rm repo
  shell: rm -rf /etc/yum.repos.d/*
- name: cp repo
  copy: src=/root/local.repo dest=/etc/yum.repos.d/
- name: install mariadb
  shell: yum install mariadb mariadb-server -y
- name: start mariadb
  shell: systemctl restart mariadb
- name: password
  shell: mysqladmin password 123456| cat
- name: grant all
  shell: mysql -uroot -p123456 -e "grant all privileges on *.* to 'root'@'%' identified by '123456';"
- name: stop mariadb
  shell: systemctl stop mariadb
- name: cp config
  copy: src=/root/server.cnf dest=/etc/my.cnf.d/
- name: start mariadb
  shell: galera_new_cluster
  when: ansible_hostname=='node1'
- name: start mariadb
  shell: systemctl restart mariadb
  when: ansible_hostname=='node2'
- name: start mariadb
  shell: systemctl restart mariadb
  when: ansible_hostname=='node3'
[root@ansible ~]# cp /etc/my.cnf.d/server.cnf .
[root@ansible ~]# vim server.cnf
[galera]   # uncomment the following lines
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://node1,node2,node3
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
## Allow server to accept connections on all interfaces.
#bind-address=0.0.0.0
[root@ansible ~]# cd example/
[root@ansible example]# ansible-playbook cscc_install.yaml
```

### 2. [Hands-on] Ansible service deployment: deploy an FTP service (1 point)

Using the OpenStack private cloud provided by the contest, create two CentOS 7.5 instances: one as the Ansible control node named ansible, the other named node1. Install the Ansible service on the ansible node from the ansible.tar.gz package on the HTTP server, and on that control node write an Ansible playbook (create the ansible_ftp directory under /root as the working directory, with install_ftp.yaml as the entry file). install_ftp.yaml must: (1) target node1 and run as root; (2) use the copy module to push local.repo from the ansible node to the node (local.repo configures the node's yum repository and can be created yourself); (3) use the yum module to install the ftp service; (4) use the service module to start the ftp service. When finished, submit the ansible node's username, password and IP address. (The marking system connects to the ansible node and runs the playbook, so have the environment ready.)

```yaml
[root@ansible ~]# vi /etc/hosts
192.168.100.37 ansible
192.168.100.33 node1
[root@ansible ~]# tar -zxvf ansible.tar.gz -C /opt/
[root@ansible ~]# rm -rf /etc/yum.repos.d/*
[root@ansible ~]# vi /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=http://192.168.100.10/image/iaas/centos7.9/
gpgcheck=0
enabled=1
[ansible]
name=ansible
baseurl=file:///opt/ansible
gpgcheck=0
enabled=1
[root@ansible ~]# cp /etc/yum.repos.d/local.repo .
[root@ansible ~]# yum install ansible vim -y
[root@ansible ~]# vi /etc/ansible/hosts
[hosts]
node1 ansible_ssh_pass=000000
[root@ansible ~]# vim /etc/ansible/ansible.cfg
host_key_checking = False  # uncomment
command_warnings = False   # uncomment
[root@ansible ~]# ssh-keygen
[root@ansible ~]# ssh-copy-id root@node1
[root@ansible ~]# mkdir /root/ansible_ftp
[root@ansible ~]# cd ansible_ftp/
[root@ansible ansible_ftp]# vim install_ftp.yaml
---
- hosts: node1
  remote_user: root
  tasks:
    - name: rm repo
      shell: rm -rf /etc/yum.repos.d/*
    - name: cp repo
      copy: src=/root/local.repo dest=/etc/yum.repos.d/
    - name: install
      yum: name=vsftpd state=present
    - name: start ftp
      service: name=vsftpd state=started
[root@ansible ansible_ftp]# ansible-playbook install_ftp.yaml
```

### 4. Ansible service deployment: deploy the Zabbix service (2 points)

Using the OpenStack private cloud provided by the contest, create two CentOS 7.5 instances: one as the Ansible control node named ansible, the other named node. Install Ansible on the ansible node from the ansible.tar.gz package on the HTTP server. On the control node, download install_zabbix.tar.gz from the HTTP server, extract it to /root, and complete the Ansible playbook so that running install_zabbix.yaml installs the Zabbix service on the node. When finished, submit the ansible node's username, password and IP address. (The marking system connects to the ansible node and runs the playbook, so have the environment ready.)

```yaml
[root@ansible ~]# tar -zxvf ansible.tar.gz -C /opt/
[root@ansible ~]# vim /etc/hosts
192.168.100.36 ansible
192.168.100.17 node
[root@ansible ~]# tar -zxvf install_zabbix.tar.gz
[root@ansible ~]# rm -rf /etc/yum.repos.d/*
[root@ansible ~]# vim /etc/yum.repos.d/local.repo
[ansible]
name=ansible
baseurl=file:///opt/ansible
gpgcheck=0
enabled=1
[root@ansible ~]# yum install ansible -y
[root@ansible ~]# vim /etc/ansible/hosts
[hosts]
node ansible_ssh_pass=000000
[root@ansible ~]# vim /etc/ansible/ansible.cfg
host_key_checking = False  # uncomment
command_warnings = False
[root@ansible ~]# cd install_zabbix
[root@ansible install_zabbix]# cd group_vars/
[root@ansible group_vars]# vim all
DB_PASS: '000000'
DB_HOST:
[root@ansible group_vars]# cd ..
[root@ansible install_zabbix]# vim install_zabbix.yaml
---
- hosts: node
  remote_user: root
  roles:
    - zabbix
[root@ansible install_zabbix]# cd roles/zabbix/files/
[root@ansible files]# vim yum.repo
[zabbix]
name=zabbix
baseurl=file:///opt/zabbix
gpgcheck=0
enabled=1
[root@ansible install_zabbix]# vim tasks/main.yaml
- name: mv yum config
  shell: mv /etc/yum.repos.d/* /root/
- name: copy repo
  copy: src=yum.repo dest=/etc/yum.repos.d/yum.repo
- name: Copy Repo Tar
  copy: src=zabbix.tar.gz dest=/opt
- name: Decompression Package
  shell: tar -zxvf /opt/zabbix.tar.gz -C /opt
- name: Mariadb Create zabbix
  shell: mysql -uroot -p{{ DB_PASS }} -e "create database zabbix character set utf8 collate utf8_bin;"
  ignore_errors: yes
- name: privileges mariadb
  shell: "{{ item }}"
  with_items:
    - mysql -uroot -p{{ DB_PASS }} -e "grant all privileges on zabbix.* to zabbix@'%' identified by 'zabbix';"
    - mysql -uroot -p{{ DB_PASS }} -e "grant all privileges on zabbix.* to zabbix@localhost identified by 'zabbix';"
- name: Install zabbix
  yum:
    name:
      - zabbix-server-mysql
      - zabbix-web-mysql
      - zabbix-agent
      - trousers
- name: Config zabbix_server.conf
  template: src=zabbix_server.conf.j2 dest=/etc/zabbix/zabbix_server.conf
```

### 
5. Ansible service deployment: deploy an ELK cluster (2 points)

Using the OpenStack private cloud provided by the contest, create three CentOS 7.9 instances named elk-1, elk-2 and elk-3; the Ansible control node from the previous task can be reused. On the Ansible node, write a playbook that deploys an ELK cluster on those three nodes (create the install_elk directory under /root as the Ansible working directory, with install_elk.yaml as the entry file). Specifically, install Elasticsearch on all three nodes and configure them as an Elasticsearch cluster; install Kibana on the first node; install Logstash on the second node (the required packages are on the HTTP server). When finished, submit the ansible node's username, password and IP address. (The marking system connects to the ansible node and runs the playbook, so have the environment ready.)

```plain
Marking:
ansible elk-1 -a "netstat -ntlp"
5601||9300||9200
cd /root/install_elk && ansible-playbook install_elk.yaml
failed=0||failed=0||failed=0
```
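The marking snippet above only checks listening ports. A hedged Python sketch (assuming Elasticsearch answers on elk-1:9200 with security disabled, which the original notes do not state) that asks the cluster itself whether all three nodes joined:

```python
import requests

# _cluster/health is a standard Elasticsearch endpoint; 'green' or 'yellow'
# together with number_of_nodes == 3 indicates the cluster formed correctly
rsp = requests.get('http://elk-1:9200/_cluster/health', timeout=5)
health = rsp.json()
print(health['status'], health['number_of_nodes'])
assert health['number_of_nodes'] == 3
```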
• [Tech Notes] KFC Ops 4
Task 2: Private Cloud Service Operations (15 points)

1.2.1 Glance: open image permissions
Share the specified image with the specified project.
[root@controller ~]# glance image-create --name glance-cirros --disk-format qcow2 --container-format bare < cirros-0.3.4-x86_64-disk.img
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                                                 |
| container_format | bare                                                                             |
| created_at       | 2024-02-22T14:09:49Z                                                             |
| disk_format      | qcow2                                                                            |
| id               | 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | glance-cirros                                                                    |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 1b03ca1bc3fafe448b90583c12f367949f8b0e665685979d95b004e48574b953316799e23240f4f7 |
|                  | 39d1b5eb4c4ca24d38fdc6f4f9d8247a2bc64db25d6bbdb2                                 |
| os_hidden        | False                                                                            |
| owner            | 7744815b067c43db86fa13120c043b92                                                 |
| protected        | False                                                                            |
| size             | 13287936                                                                         |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2024-02-22T14:09:50Z                                                             |
| virtual_size     | Not available                                                                    |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 7744815b067c43db86fa13120c043b92 | admin   |
| 8358c987bb8f4d9fa09485f94705f522 | demo    |
| f71b1bfdbf3f4a639257a3948bec2cf8 | service |
+----------------------------------+---------+
[root@controller ~]# openstack image list
+--------------------------------------+---------------+--------+
| ID                                   | Name          | Status |
+--------------------------------------+---------------+--------+
| 2e2515ad-046f-4494-8d47-f9c943febc63 | cirros        | active |
| 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69 | glance-cirros | active |
+--------------------------------------+---------------+--------+
# image id first (this shares the glance-cirros image with the demo project)
[root@controller ~]# glance member-create 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69 8358c987bb8f4d9fa09485f94705f522
+--------------------------------------+----------------------------------+---------+
| Image ID                             | Member ID                        | Status  |
+--------------------------------------+----------------------------------+---------+
| 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69 | 8358c987bb8f4d9fa09485f94705f522 | pending |
+--------------------------------------+----------------------------------+---------+
[root@controller ~]# glance member-update 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69 8358c987bb8f4d9fa09485f94705f522 accepted
+--------------------------------------+----------------------------------+----------+
| Image ID                             | Member ID                        | Status   |
+--------------------------------------+----------------------------------+----------+
| 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69 | 8358c987bb8f4d9fa09485f94705f522 | accepted |
+--------------------------------------+----------------------------------+----------+

1.2.2 Glance: image conversion
Convert the CentOS image to RAW format.
[root@controller ~]# qemu-img convert -f qcow2 -O raw CentOS7.5-compress.qcow2 CentOS7.5-compress.raw

1.2.3 Glance: image storage quota
Using a self-built OpenStack platform, edit the Glance backend configuration file to change the per-user image storage quota.
[root@controller ~]# vim /etc/glance/glance-api.conf
#user_storage_quota = 0    change to    user_storage_quota = 200G
[root@controller ~]# systemctl restart openstack-glance*

1.2.4 Nova: clear unused image cache
On the OpenStack platform, modify the configuration so that image caches unused for a long time are automatically deleted after a set period.
[root@controller ~]# vim /etc/nova/nova.conf
remove_unused_base_images=true   # uncomment

1.2.5 Create a network with a Heat template
On a self-built OpenStack private cloud, write a Heat template file that creates the network.
[root@controller ~]# vim create_net.yaml
heat_template_version: 2015-10-15
resources:
  network:
    type: OS::Neutron::Net
    properties:
      name: Heat-Network
      admin_state_up: true   # admin state
      shared: false
  subnet:
    type: OS::Neutron::Subnet
    properties:
      name: Heat-Subnet
      network_id:
        get_resource: network
      cidr: 10.20.2.0/24     # subnet range
      allocation_pools:      # allocation pool
      - start: 10.20.2.20    # start
        end: 10.20.2.100     # end
      enable_dhcp: true      # enable DHCP
[root@controller ~]# openstack stack create -t create_net.yaml test
[root@controller ~]# openstack stack delete test -y

1.2.6 Deploy an NFS application service
Using the OpenStack private cloud, create an instance, install the NFS service, and then attach it as the Glance backend storage.
[root@compute ~]# systemctl start rpcbind
[root@compute ~]# systemctl start nfs
[root@compute ~]# vi /etc/exports
/mnt/test *(rw,sync,no_root_squash)
[root@compute ~]# mkdir /mnt/test
[root@compute ~]# exportfs -r
[root@compute ~]# showmount -e localhost
Export list for localhost:
/mnt/test                                                        *
/var/lib/manila/mnt/share-b7f82a14-9804-46c3-bcfa-6accbe65ae20   127.0.0.0/24,192.168.100.142
[root@controller ~]# showmount -e 192.168.100.226
Export list for 192.168.100.226:
/mnt/test                                                        *
/var/lib/manila/mnt/share-b7f82a14-9804-46c3-bcfa-6accbe65ae20   127.0.0.0/24,192.168.100.142
[root@controller ~]# mount -t nfs 192.168.100.226:/mnt/test /var/lib/glance/images/
[root@controller ~]# df -h
192.168.100.226:/mnt/test     100G  2.0G   99G   2% /var/lib/glance/images
[root@controller ~]# chown glance:glance /var/lib/glance/images/
[root@controller ~]# ls -l /var/lib/glance/
total 0
drwxr-xr-x. 2 glance glance 6 Sep 23 08:40 images

1.2.7 Deploy a Redis cluster
Deploy a Redis cluster in the one-master, two-slave, three-sentinel architecture.
# edit the redis1 config
[root@redis1 ~]# vi /etc/redis.conf
# bind 127.0.0.1     # comment out
protected-mode no    # change to no
daemonize yes        # change to yes
below "# requirepass foobared" add:             requirepass "123456"
below "# masterauth <master-password>" add:     masterauth "123456"
[root@redis1 ~]# systemctl restart redis
# edit redis2 and redis3
[root@redis2 ~]# vi /etc/redis.conf
# bind 127.0.0.1     # comment out
protected-mode no    # change to no
daemonize yes        # change to yes
below "# requirepass foobared" add:             requirepass "123456"
below "# masterauth <master-password>" add:     masterauth "123456"
below "# slaveof <masterip> <masterport>" add:  slaveof 192.168.100.115 6379
[root@redis2 ~]# systemctl restart redis
redis3 is configured the same way.
[root@redis1 ~]# systemctl restart redis
[root@redis1 ~]# redis-cli -a 123456 info replication
connected_slaves:2
[root@redis1 ~]# vi /etc/sentinel.conf
port 26379
protected-mode no    # protected mode
daemonize yes
sentinel monitor redis1 192.168.100.115 6379 2
sentinel auth-pass redis1 123456
[root@redis2 ~]# vi /etc/sentinel.conf
port 26380
protected-mode no
daemonize yes
sentinel monitor redis1 192.168.100.115 6379 2
sentinel auth-pass redis1 123456
[root@redis3 ~]# vi /etc/sentinel.conf
port 26381
protected-mode no
daemonize yes
sentinel monitor redis1 192.168.100.115 6379 2
sentinel auth-pass redis1 123456
# start sentinel mode
[root@redis1 ~]# redis-server /etc/sentinel.conf --sentinel
[root@redis2 ~]# redis-server /etc/sentinel.conf --sentinel
[root@redis3 ~]# redis-server /etc/sentinel.conf --sentinel

1.2.8 Redis AOF tuning
Modify the Redis configuration so that Redis rewrites the AOF file before it grows too large.
[root@redis1 ~]# vi /etc/redis.conf
appendonly no    change to    appendonly yes    # enable AOF
no-appendfsync-on-rewrite no          # keep fsyncing even while a rewrite is running
auto-aof-rewrite-percentage 100       # rewrite once the AOF has grown by 100% (doubled)
auto-aof-rewrite-min-size 64mb        # minimum AOF size of 64mb before rewriting starts
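As a quick check of the one-master / two-slave / three-sentinel layout from 1.2.7, a minimal redis-py sketch (assuming the redis Python package is installed; the master address, master name and password come from the configuration above, while the redis2/redis3 addresses are assumptions):

```python
from redis.sentinel import Sentinel

# the three sentinels configured above (ports 26379/26380/26381)
sentinels = Sentinel([('192.168.100.115', 26379),
                      ('192.168.100.116', 26380),   # redis2/redis3 IPs are assumptions
                      ('192.168.100.117', 26381)],
                     socket_timeout=0.5)

# 'redis1' is the master name used in the `sentinel monitor` lines above
print('master:', sentinels.discover_master('redis1'))
print('slaves:', sentinels.discover_slaves('redis1'))

master = sentinels.master_for('redis1', password='123456', socket_timeout=0.5)
print(master.info('replication')['connected_slaves'])   # expect 2
```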
• [Tech Notes] KFC Ops 3
Task 2: Private Cloud Service Operations (15 points)

1.2.1 RAID array management
On an instance, partition the attached volume, then create a RAID 5 array named /dev/md5 with one hot spare.
[root@raid ~]# yum install mdadm
[root@raid ~]# fdisk /dev/sdb
[root@raid ~]# mdadm -Cv /dev/md5 -l5 -n3 /dev/sdb[1-3] --spare-devices=1 /dev/sdb4
[root@raid ~]# mdadm -D /dev/md5

1.2.2 Message queue tuning
On the OpenStack private cloud, set the maximum connection count for the RabbitMQ service at the user level, the system level and in the service configuration.
[root@controller ~]# vim /etc/security/limits.conf
add:
openstack soft nofile 10240
openstack hard nofile 10240
[root@controller ~]# vim /etc/sysctl.conf
add:
fs.file-max = 10240
[root@controller ~]# vim /usr/lib/systemd/system/rabbitmq-server.service
under [Service] add:
LimitNOFILE=10240
[root@controller ~]# systemctl daemon-reload
[root@controller ~]# systemctl restart rabbitmq-server
[root@controller ~]# rabbitmqctl status
[{total_limit,10140},

1.2.3 Keystone optimization
Modify the configuration to increase the cache time of Keystone's revocation list.
[root@controller ~]# vim /etc/keystone/keystone.conf
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.memcache.Token
[memcache]
servers = localhost:11211

1.2.4 Glance: image compression
A CentOS image exists on the HTTP file server; compress that image.
[root@controller ~]# qemu-img convert -c -O qcow2 http://192.168.100.91/image/iaas/CentOS7.5-compress.qcow2 /root/chinaskill-js-compress.qcow2

1.2.5 Resize an instance flavor
On the OpenStack private cloud, modify the configuration so that resizing an instance is possible.
[root@controller ~]# vim /etc/nova/nova.conf
allow_resize_to_same_host=false    change to    allow_resize_to_same_host=true

1.2.6 Nova database connection tuning
Modify the Nova configuration to adjust the connection pool size and the maximum number of overflow connections.
[Nova tuning]
[root@controller ~]# vi /etc/nova/nova.conf
# overcommit ratios
ram_allocation_ratio = 1.0     # default is 1.5; memory overcommit ratio
disk_allocation_ratio = 1.2    # disk overcommit ratio, default 1:1
cpu_allocation_ratio = 4.0     # CPU overcommit ratio; 4x is recommended, so vCPUs can total 4x the host cores
# reservations
vcpu_pin_set = 4-<N>           # <N> depends on the host's total CPU core count; here 4 cores are reserved so instances do not compete with the host for CPU
reserved_host_memory_mb = 4096 # memory reserved for the host and unavailable to instances, in MB
reserved_host_disk_mb = 10240  # disk space reserved for the host and unavailable to instances, in MB
# service-down time
service_down_time=120          # service-down threshold in seconds: a node's nova service is considered down if it has not reported to the database within this time
[Database tuning]
[root@controller ~]# vi /etc/my.cnf
max_connections=1500      # maximum number of connections (users); every connection to MySQL counts as one
max_connect_errors=30     # limit on failed connection attempts

1.2.7 Create a network with a Heat template
On a self-built OpenStack private cloud, write a Heat template file that creates the network.
[root@controller ~]# vim create_net.yaml
heat_template_version: 2015-10-15
resources:
  network:
    type: OS::Neutron::Net
    properties:
      name: Heat-Network
      admin_state_up: true   # admin state
      shared: false
  subnet:
    type: OS::Neutron::Subnet
    properties:
      name: Heat-Subnet
      network_id:
        get_resource: network
      cidr: 10.20.2.0/24     # subnet range
      allocation_pools:      # allocation pool
      - start: 10.20.2.20    # start
        end: 10.20.2.100     # end
      enable_dhcp: true      # enable DHCP
[root@controller ~]# openstack stack create -t create_net.yaml test
[root@controller ~]# openstack stack delete test -y

1.2.8 Redis application deployment
Using the OpenStack private cloud provided by the contest, create two CentOS 7.9 instances and configure a Redis master/slave architecture. Using the provided HTTP repository, install and start the Redis service on both nodes, require a password for Redis access (set to 123456), and then configure the two Redis nodes as master and slave.
# master node
[root@master ~]# yum install -y redis
[root@master ~]# cat /etc/redis.conf
bind 0.0.0.0
protected-mode no
masterauth 123456
requirepass 123456
appendonly yes
daemonize yes
[root@master ~]# systemctl restart redis
# slave node
[root@node ~]# yum install -y redis
[root@node ~]# cat /etc/redis.conf
bind 0.0.0.0
protected-mode no
# master node ip
slaveof 192.168.10.10 6379
masterauth 123456
requirepass 123456
appendonly yes
daemonize yes
[root@node ~]# systemctl restart redis
# verify
[root@master ~]# redis-cli -a 123456
127.0.0.1:6379> info Replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.10.11,port=6379,state=online,offset=29,lag=0
master_repl_offset:29
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:28
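To make the overcommit settings from section 1.2.6 above concrete, here is a small worked example in Python (the host size is an assumption, not stated in the original notes; the ratios and reservations are the values shown above):

```python
# assumed host: 16 physical cores, 64 GB RAM (not stated in the original notes)
physical_cores = 16
physical_ram_mb = 64 * 1024

cpu_allocation_ratio = 4.0        # from nova.conf in 1.2.6
ram_allocation_ratio = 1.0
reserved_host_memory_mb = 4096

schedulable_vcpus = physical_cores * cpu_allocation_ratio
schedulable_ram_mb = (physical_ram_mb - reserved_host_memory_mb) * ram_allocation_ratio

print(schedulable_vcpus)    # 64.0 vCPUs can be handed out to instances
print(schedulable_ram_mb)   # 61440.0 MB of RAM is available for scheduling
```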
• [Tech Notes] KFC Ops 2
Task 2: Private Cloud Service Operations (15 points)

1.2.1 Linux network namespace operations
Create network namespaces and make the virtual network devices in them communicate.
# create test1 and test2
[root@controller ~]# ip netns add test1
[root@controller ~]# ip netns add test2
# bring up test1 and test2
[root@controller ~]# ip netns exec test1 ip link set dev lo up
[root@controller ~]# ip netns exec test2 ip link set dev lo up
# create a veth pair
[root@controller ~]# ip link add veth-test1 type veth peer name veth-test2
# assign the ends to test1 and test2
[root@controller ~]# ip link set veth-test1 netns test1
[root@controller ~]# ip link set veth-test2 netns test2
# assign IP addresses to the veth ends
[root@controller ~]# ip netns exec test1 ip addr add 192.168.100.1/24 dev veth-test1
[root@controller ~]# ip netns exec test2 ip addr add 192.168.100.2/24 dev veth-test2
# bring up the veth ends
[root@controller ~]# ip netns exec test1 ip link set dev veth-test1 up
[root@controller ~]# ip netns exec test2 ip link set dev veth-test2 up
# from namespace test1 you can ping 192.168.100.2
# from namespace test2 you can ping 192.168.100.1
[root@controller ~]# ip netns exec test1 ping 192.168.100.2
[root@controller ~]# ip netns exec test2 ping 192.168.100.1

1.2.2 Glance: open image permissions
Share the image with the specified project.
[root@controller ~]# glance image-create --name glance-cirros --disk-format qcow2 --container-format bare < cirros-0.3.4-x86_64-disk.img
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 7744815b067c43db86fa13120c043b92 | admin   |
| 8358c987bb8f4d9fa09485f94705f522 | demo    |
| f71b1bfdbf3f4a639257a3948bec2cf8 | service |
+----------------------------------+---------+
[root@controller ~]# openstack image list
+--------------------------------------+---------------+--------+
| ID                                   | Name          | Status |
+--------------------------------------+---------------+--------+
| 2e2515ad-046f-4494-8d47-f9c943febc63 | cirros        | active |
| 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69 | glance-cirros | active |
+--------------------------------------+---------------+--------+
# image id first (this shares the glance-cirros image with the demo project)
[root@controller ~]# glance member-create 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69 8358c987bb8f4d9fa09485f94705f522
+--------------------------------------+----------------------------------+---------+
| Image ID                             | Member ID                        | Status  |
+--------------------------------------+----------------------------------+---------+
| 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69 | 8358c987bb8f4d9fa09485f94705f522 | pending |
+--------------------------------------+----------------------------------+---------+
[root@controller ~]# glance member-update 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69 8358c987bb8f4d9fa09485f94705f522 accepted
+--------------------------------------+----------------------------------+----------+
| Image ID                             | Member ID                        | Status   |
+--------------------------------------+----------------------------------+----------+
| 5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69 | 8358c987bb8f4d9fa09485f94705f522 | accepted |
+--------------------------------------+----------------------------------+----------+

1.2.3 Glance: image compression
A CentOS image exists on the HTTP file server; compress that image.
[root@controller ~]# qemu-img convert -c -O qcow2 http://192.168.100.91/image/iaas/CentOS7.5-compress.qcow2 /root/chinaskill-js-compress.qcow2

1.2.4 Nova: clear unused image cache
On the OpenStack platform, modify the configuration so that image caches unused for a long time are automatically deleted after a set period.
[root@controller ~]# vim /etc/nova/nova.conf
remove_unused_base_images=true   # uncomment

1.2.5 Glance backed by Cinder
On a self-built OpenStack platform, modify the relevant parameters so that Glance can use Cinder as its backend storage.
[root@controller ~]# vi /etc/glance/glance-api.conf
show_multiple_locations = true
[glance_store]
stores = file,http,cinder
default_store = cinder
filesystem_store_datadir = /var/lib/glance/images/
cinder_store_auth_address=http://controller:5000/v3
cinder_store_user_name=glance
cinder_store_project_name=cirros-cinder
[root@controller ~]# vim /etc/cinder/cinder.conf
allowed_direct_url_schemes = cinder
image_upload_use_internal_tenant = true
[root@controller ~]# systemctl restart openstack-*
[root@controller ~]# glance image-create --name cirros-image --disk-format qcow2 --container-format bare < cirros-0.3.4-x86_64-disk.img

1.2.6 Create a storage container with a Heat template
On a self-built OpenStack private cloud, write a Heat template under /root so that running the yaml file creates a container named heat-swift.
[root@controller ~]# vim create_container.yaml
heat_template_version: 2015-10-15
resources:
  container:
    type: OS::Swift::Container
    properties:
      name: heat-swift
[root@controller ~]# openstack stack create -t create_container.yaml test
[root@controller ~]# openstack stack delete test -y

1.2.7 SkyWalking application deployment
Create an instance and, using the provided packages, install the Elasticsearch service and the SkyWalking service. Create a second instance to host the gpmall shop application, and configure SkyWalking to monitor the gpmall host.

1.2.8 Redis cluster deployment
Deploy a Redis cluster in the one-master, two-slave, three-sentinel architecture.
# edit the redis1 config
[root@redis1 ~]# vi /etc/redis.conf
# bind 127.0.0.1     # comment out
protected-mode no    # change to no
daemonize yes        # change to yes
below "# requirepass foobared" add:             requirepass "123456"
below "# masterauth <master-password>" add:     masterauth "123456"
[root@redis1 ~]# systemctl restart redis
# edit redis2 and redis3
[root@redis2 ~]# vi /etc/redis.conf
# bind 127.0.0.1     # comment out
protected-mode no    # change to no
daemonize yes        # change to yes
below "# requirepass foobared" add:             requirepass "123456"
below "# masterauth <master-password>" add:     masterauth "123456"
below "# slaveof <masterip> <masterport>" add:  slaveof 192.168.100.115 6379
[root@redis2 ~]# systemctl restart redis
redis3 is configured the same way.
[root@redis1 ~]# systemctl restart redis
[root@redis1 ~]# redis-cli -a 123456 info replication
connected_slaves:2
[root@redis1 ~]# vi /etc/sentinel.conf
port 26379
protected-mode no    # protected mode
daemonize yes
sentinel monitor redis1 192.168.100.115 6379 2
sentinel auth-pass redis1 123456
[root@redis2 ~]# vi /etc/sentinel.conf
port 26380
protected-mode no
daemonize yes
sentinel monitor redis1 192.168.100.115 6379 2
sentinel auth-pass redis1 123456
[root@redis3 ~]# vi /etc/sentinel.conf
port 26381
protected-mode no
daemonize yes
sentinel monitor redis1 192.168.100.115 6379 2
sentinel auth-pass redis1 123456
# start sentinel mode
[root@redis1 ~]# redis-server /etc/sentinel.conf --sentinel
[root@redis2 ~]# redis-server /etc/sentinel.conf --sentinel
[root@redis3 ~]# redis-server /etc/sentinel.conf --sentinel
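Section 1.2.2 shares an image with the glance CLI; the same member workflow exists in the Images v2 REST API. A hedged requests sketch (token handling as in the development notes earlier; the IDs are the ones printed above, and the accept call would normally be made with a token scoped to the member project):

```python
import requests, json

glance = 'http://controller:9292'
headers = {'Content-Type': 'application/json', 'X-Auth-Token': '<token>'}

image_id = '5c2a66a5-dcbb-4d2d-b35d-1264d9bfea69'     # glance-cirros
member_id = '8358c987bb8f4d9fa09485f94705f522'        # demo project

# equivalent of `glance member-create <image> <project>`
rsp = requests.post('{}/v2/images/{}/members'.format(glance, image_id),
                    headers=headers, data=json.dumps({'member': member_id}))
print(rsp.json())   # status: pending

# equivalent of `glance member-update <image> <project> accepted`
rsp = requests.put('{}/v2/images/{}/members/{}'.format(glance, image_id, member_id),
                   headers=headers, data=json.dumps({'status': 'accepted'}))
print(rsp.json())   # status: accepted
```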
• [Tech Notes] KFC Ops 1
[Question 5] 1.1.5 Database installation and tuning [0.5 pt]
Install MariaDB, RabbitMQ and related services on the controller node and perform the related operations. On the controller node, use the iaas-install-mysql.sh script to install MariaDB, Memcached, RabbitMQ and so on, then edit /etc/my.cnf to meet the following requirements:
iaas-install-mysql.sh
vim /etc/my.cnf
[mysqld]
1. Make table names case-insensitive:
lower_case_table_names = 1
2. Set the buffer for caching InnoDB indexes, data and inserts to 4G:
innodb_buffer_pool_size = 4G
3. Set the log buffer to 64MB:
innodb_log_buffer_size = 64M
4. Set the redo log size to 256MB:
innodb_log_file_size = 256M
5. Set the redo log file group to 2:
innodb_log_files_in_group = 2
vim /etc/sysconfig/memcached
6. Set Memcached's memory usage to 512MB and the maximum connections to 2048:
set MAXCONN to 2048 and CACHESIZE to 512
7. Set Memcached's hash algorithm to md5:
hash_algorithm=md5
When done, submit the controller node's username, password and IP address.

[Question 6] 1.1.6 Keystone installation and use [0.5 pt]
Install the Keystone service on the controller node and create users. Use the iaas-install-keystone.sh script on the controller node to install Keystone, then create the OpenStack domain 210Demo containing the Engineering and Production projects, and in the 210Demo domain create the group Devops with the following users:
iaas-install-keystone.sh
source /etc/keystone/admin-openrc.sh    # activate the openstack CLI
openstack domain create 210Demo         # create the domain
openstack group create devops --domain 210Demo   # create the devops group
# create the two projects
openstack project create Engineering --domain 210Demo
openstack project create Production --domain 210Demo
1. Robert is a member and admin of the Engineering project, email Robert@lab.example.com:
openstack user create --domain 210Demo --project Engineering Robert --email Robert@lab.example.com
2. George is a member of the Engineering project, email George@lab.example.com:
openstack user create --domain 210Demo --project Engineering George --email George@lab.example.com
3. William is a member and admin of the Production project, email William@lab.example.com:
openstack user create --domain 210Demo --project Production William --email William@lab.example.com
4. John is a member of the Production project, email John@lab.example.com:
openstack user create --domain 210Demo --project Production John --email John@lab.example.com
# add the users to the projects
openstack role add --user Robert --project Engineering member
openstack role add --user Robert --project Engineering admin
openstack role add --user George --project Engineering member
openstack role add --user William --project Production member
openstack role add --user William --project Production admin
openstack role add --user John --project Production member
When done, submit the controller node's username, password and IP address.

[Question 7] 1.1.7 Glance installation and use [0.5 pt]
Install the Glance service on the controller node, upload an image to the platform, and set the required image boot parameters. On the controller node, use the iaas-install-glance.sh script to install Glance, then upload the provided coreos_production_pxe.vmlinuz image (an Ironic deploy image in AWS kernel format, needed by the OpenStack Ironic bare-metal service) to the platform and name it deploy-vmlinuz. When done, submit the controller node's username, password and IP address.
iaas-install-glance.sh
fetch the cirros-0.3.4-x86_64-disk.img image via ftp
glance image-create --name cirros --disk-format qcow2 --container-format bare < cirros-0.3.4-x86_64-disk.img --min-disk 10 --min-ram 1024   # upload the image

[Question 8] 1.1.8 Nova installation and optimization [0.5 pt]
Install the Nova service on the controller and compute nodes and finish the related Nova configuration. On the controller and compute nodes, use the iaas-install-placement.sh, iaas-install-nova-controller.sh and iaas-install-nova-compute.sh scripts to install Nova. In OpenStack, change the scheduler to the caching scheduler, which caches host information to speed up scheduling. When done, submit the controller node's username, password and IP address.
Controller node:  iaas-install-placement.sh   iaas-install-nova-controller.sh
Compute node:     iaas-install-nova-compute.sh
Back on the controller node:  vim /etc/nova/nova.conf
change line 4510 to:  driver = caching_scheduler

[Question 9] 1.1.9 Neutron installation [0.2 pt]
Install the Neutron service correctly on the controller and compute nodes using the provided iaas-install-neutron-controller.sh and iaas-install-neutron-compute.sh scripts. When done, submit the controller node's username, password and IP address.
Controller node: iaas-install-neutron-controller.sh
Compute node:    iaas-install-neutron-compute.sh

[Question 10] 1.1.10 Dashboard installation [0.5 pt]
Install the Dashboard service on the controller node, then change the Dashboard so that Django data is stored in files. On the controller node, use the iaas-install-dashboard.sh script to install Dashboard, then edit the configuration file to complete the following two tasks:
iaas-install-dashboard.sh
vim /etc/openstack-dashboard/local_settings
1. Logging in to the Dashboard must not require entering a domain name:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False
2. Store the Dashboard's Django data in files:
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
When done, submit the controller node's username, password and IP address.

[Question 11] 1.1.11 Swift installation [0.5 pt]
Install the Swift service on the controller and compute nodes, then store the cirros image in segments. Use the iaas-install-swift-controller.sh and iaas-install-swift-compute.sh scripts on the controller and compute nodes respectively. After installation, create a container named examcontainer and upload cirros-0.3.4-x86_64-disk.img to it, stored in segments of 10M each. When done, submit the controller node's username, password and IP address.
On controller:  iaas-install-swift-controller.sh
On compute:     iaas-install-swift-compute.sh
On controller:  source /etc/keystone/admin-openrc.sh     # refresh the environment variables
swift post examcontainer                                 # create the container
swift upload -S 10M examcontainer cirros-0.3.4-x86_64-disk.img   # upload the image

[Question 12] 1.1.12 Cinder volume creation [0.5 pt]
Install the Cinder service on the controller and compute nodes, then expand the block storage on the compute node. Use the iaas-install-cinder-controller.sh and iaas-install-cinder-compute.sh scripts, then on the compute node create an additional 5G partition and add it to the Cinder backend storage. When done, submit the compute node's username, password and IP address.
On controller:  iaas-install-cinder-controller.sh
On compute:     iaas-install-cinder-compute.sh
On compute:
fdisk /dev/sdb        # create a 5G partition; if it does not show up after creation, reboot the VM
pvcreate /dev/sdb4    # create the physical volume
vgextend cinder-volumes /dev/sdb4   # extend the volume group
vgs                   # check the result

[Question 13] 1.1.13 Disable ping on the host [0.5 pt]
Edit the controller node's configuration so that other nodes cannot ping it. When done, submit the controller node's username, password and IP address.
vim /etc/sysctl.conf
net.ipv4.icmp_echo_ignore_all = 1

### 1. Keystone access control (1 point)

On a self-built OpenStack private cloud, modify the permissions of regular users so that they cannot create or delete images. When done, submit the controller node's username, password and IP address.

```plain
[root@controller ~]# vi /etc/glance/policy.json
"add_image": "",
"delete_image": "",
change to
"add_image": "role:admin",
"delete_image": "role:admin",
```

### 2. OpenStack Glance image compression (1 point)

On a self-built OpenStack platform, an image named CentOS7.5-compress.qcow2 exists on the HTTP server. Use the qemu tooling to compress it, name the result chinaskill-js-compress.qcow2, and store it under /root. When done, submit the controller node's username, password and IP address.

```plain
[root@controller ~]# qemu-img convert -c -O qcow2 http://192.168.100.91/image/iaas/CentOS7.5-compress.qcow2 /root/chinaskill-js-compress.qcow2
-c  compress
-O qcow2  output format is qcow2
http://192.168.100.91/image/iaas/CentOS7.5-compress.qcow2  path of the file to compress
/root/chinaskill-js-compress.qcow2  path of the compressed result
```

### 3. Glance image sharing (1 point)

On a self-built OpenStack private cloud, use the provided cirros-0.3.4-x86_64-disk.img file (on the HTTP server) to create an image named glance-cirros in the admin project, then share the glance-cirros image with the demo project via the command line. When done, submit the host's username, password and IP address.

```plain
[root@controller ~]# glance image-create --name glance-cirros --disk-format qcow2 --container-format bare < cirros-0.3.4-x86_64-disk.img
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 6262c028ff5a4f3589825aca15059e65 | demo    |
| b4e0524850e74eaab78446350a33df56 | admin   |
| f6d623c62f5a4319b62ccaa2cd301450 | service |
+----------------------------------+---------+
[root@controller ~]# openstack image list
+--------------------------------------+---------------+--------+
| ID                                   | Name          | Status |
+--------------------------------------+---------------+--------+
| 817641c0-2640-45de-94ab-6a9c8ec190a3 | cirros        | active |
| db2392d3-cc0b-4b4e-85e8-a68b007869dd | glance-cirros | active |
+--------------------------------------+---------------+--------+
# image id first
[root@controller ~]# glance member-create db2392d3-cc0b-4b4e-85e8-a68b007869dd 6262c028ff5a4f3589825aca15059e65
+--------------------------------------+----------------------------------+---------+
| Image ID                             | Member ID                        | Status  |
+--------------------------------------+----------------------------------+---------+
| db2392d3-cc0b-4b4e-85e8-a68b007869dd | 6262c028ff5a4f3589825aca15059e65 | pending |
+--------------------------------------+----------------------------------+---------+
[root@controller ~]# glance member-update db2392d3-cc0b-4b4e-85e8-a68b007869dd 6262c028ff5a4f3589825aca15059e65 accepted
+--------------------------------------+----------------------------------+----------+
| Image ID                             | Member ID                        | Status   |
+--------------------------------------+----------------------------------+----------+
| db2392d3-cc0b-4b4e-85e8-a68b007869dd | 6262c028ff5a4f3589825aca15059e65 | accepted |
+--------------------------------------+----------------------------------+----------+
```
### 4. OpenStack Heat operations: create a flavor (1 point)

On the OpenStack private cloud, write the template server.yaml under /root to create a flavor named "m1.flavor" with ID 1234, 1024MB RAM, a 20GB disk and 1 vCPU. When done, submit the controller node's username, password and IP address. (Have the environment for running the yaml template ready before submitting.)

```plain
# look up the resource type syntax
[root@controller ~]# heat resource-type-list
[root@controller ~]# vim server.yaml
heat_template_version: 2015-10-15
resources:
  server:
    type: OS::Nova::Flavor
    properties:
      name: m1.flavor
      flavorid: 1234
      ram: 1024
      disk: 20
      vcpus: 1
[root@controller ~]# openstack stack create -t server.yaml test
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | aa12f368-2d72-45ac-a36d-0615261a8bc2 |
| stack_name          | test                                 |
| description         | No description                       |
| creation_time       | 2022-09-23T07:58:19Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
[root@controller ~]# openstack stack delete test -y
```

### 5. OpenStack Heat operations: create a user (1 point)

On a self-built OpenStack private cloud or the provided all-in-one platform, write the Heat template create_user.yaml under /root to create a user named heat-user that belongs to the admin project, give heat-user the admin role, and set its password to 123456. When done, submit the controller node's username, password and IP address. (Have the environment for running the yaml template ready before submitting.)

```plain
[root@controller ~]# vim create_user.yaml
heat_template_version: 2015-10-15
resources:
  user:
    type: OS::Keystone::User
    properties:
      name: heat-user
      password: 123456
      domain: demo
      roles:
      - role: admin          # role
        project: admin       # project
      default_project: admin # default project
[root@controller ~]# openstack stack create -t create_user.yaml test
[root@controller ~]# openstack stack delete test -y
```

### 6. OpenStack Heat operations: create a network (1 point)

On a self-built OpenStack private cloud or the provided all-in-one platform, write the Heat template create_net.yaml under /root to create a non-shared network named Heat-Network with a subnet named Heat-Subnet on 10.20.2.0/24, DHCP enabled and an allocation pool of 10.20.2.20-10.20.2.100. When done, submit the IP address, username and password of the node whose configuration was changed.

```plain
[root@controller ~]# vim create_net.yaml
heat_template_version: 2015-10-15
resources:
  network:
    type: OS::Neutron::Net
    properties:
      name: Heat-Network
      admin_state_up: true   # admin state
      shared: false
  subnet:
    type: OS::Neutron::Subnet
    properties:
      name: Heat-Subnet
      network_id:
        get_resource: network
      cidr: 10.20.2.0/24     # subnet range
      allocation_pools:      # allocation pool
      - start: 10.20.2.20    # start
        end: 10.20.2.100     # end
      enable_dhcp: true      # enable DHCP
[root@controller ~]# openstack stack create -t create_net.yaml test
[root@controller ~]# openstack stack delete test -y
```

### 7. Resize an instance flavor (1 point)

On the OpenStack private cloud, use a centos7.9 image (upload it yourself) and a 1 vCPU / 2G RAM / 40G disk flavor to create the instance cscc_vm. Assuming this configuration turns out to be too small, modify the configuration so that "resize instance" works in the dashboard, then resize the instance to 2 vCPU / 4G RAM / 40G disk. When done, submit the IP address, username and password of the node whose configuration was changed.

```plain
[root@controller ~]# vim /etc/nova/nova.conf
allow_resize_to_same_host=false
change to
allow_resize_to_same_host=true
```

### 8. OpenStack Cinder operations: data encryption (1 point)

On a self-built OpenStack cloud, enable Cinder volume encryption through the relevant configuration, then create the encrypted volume type luks with a 512-bit key, Cipher aes-xts-plain64, Control Location front-end, and Provider nova.volume.encryptors.luks.LuksEncryptor. Finally create two 1G volumes, one plain and one using the encrypted volume type. When done, submit the IP address, username and password of the node whose configuration was changed.

```plain
[root@controller ~]# vim /etc/nova/nova.conf
[key_manager]
add:
api_class=nova.keymgr.conf_key_mgr.ConfKeyManager
[root@controller ~]# vim /etc/cinder/cinder.conf
[key_manager]
add:
api_class=cinder.keymgr.conf_key_mgr.ConfKeyManager
```

### 9. Snapshot management (1 point)

On the OpenStack private cloud, create an instance with a 2 vCPU / 4G RAM / 40G disk flavor. After it is created, snapshot the instance and save the snapshot to /root/cloudsave on the controller node as csccvm.qcow2. Finally use qemu-img to change the image's compat version to 0.10 (this adapts the image to some older cloud platforms). When done, submit the controller node's username, password and IP address.

```plain
[root@controller ~]# mkdir /root/cloudsave
[root@controller ~]# cd /root/cloudsave/
[root@controller cloudsave]# openstack image list
[root@controller cloudsave]# openstack image save --file csccvm.qcow2 cirros
[root@controller cloudsave]# qemu-img amend -f qcow2 -o compat=0.10 csccvm.qcow2
[root@controller cloudsave]# qemu-img info csccvm.qcow2
compat: 0.10
```

### 10. Change the Glance storage backend (1 point)

On the provided OpenStack private cloud, create an instance (use a flavor with a 50G ephemeral disk), configure it as an NFS server, and export its /mnt/test directory (create it if it does not exist). Then configure the controller node as the NFS client and mount /mnt/test as the Glance backend storage directory. When done, submit the controller node's username, password and IP address.

```plain
[root@compute ~]# systemctl start rpcbind
[root@compute ~]# systemctl start nfs
[root@compute ~]# vi /etc/exports
/mnt/test *(rw,sync,no_root_squash)
[root@compute ~]# mkdir /mnt/test
[root@compute ~]# exportfs -r
[root@compute ~]# showmount -e localhost
Export list for localhost:
/mnt/test                                                        *
/var/lib/manila/mnt/share-b7f82a14-9804-46c3-bcfa-6accbe65ae20   127.0.0.0/24,192.168.100.142
[root@controller ~]# showmount -e 192.168.100.226
Export list for 192.168.100.226:
/mnt/test                                                        *
/var/lib/manila/mnt/share-b7f82a14-9804-46c3-bcfa-6accbe65ae20   127.0.0.0/24,192.168.100.142
[root@controller ~]# mount -t nfs 192.168.100.226:/mnt/test /var/lib/glance/images/
[root@controller ~]# df -h
192.168.100.226:/mnt/test     100G  2.0G   99G   2% /var/lib/glance/images
[root@controller ~]# chown glance:glance /var/lib/glance/images/
[root@controller ~]# ls -l /var/lib/glance/
total 0
drwxr-xr-x. 2 glance glance 6 Sep 23 08:40 images
```

### 11. Use Swift as the Glance backend storage (1 point)

On the OpenStack private cloud, use the Swift object storage service and modify the configuration so that Swift is the backend storage for the Glance image service, and uploaded images are by default stored in a chinaskill_glance container in Swift. Upload a test image after configuring. When done, submit the controller node's username, password and IP address.

```plain
[root@controller ~]# vi /etc/glance/glance-api.conf
[glance_store]
stores = file,http,swift
default_store = swift
filesystem_store_datadir = /var/lib/glance/images/
swift_store_auth_address=http://controller:5000/v3
swift_store_multi_tenant=True
swift_store_admin_tenants=service
swift_store_user=glance
swift_store_container=chinaskill_glance
swift_store_create_container_on_put=True
[root@controller ~]# systemctl restart openstack-glance*
[root@controller ~]# glance image-create --name tes1t --disk-format raw --container-format docker < cirros-0.3.4-x86_64-disk.img
[root@controller ~]# swift list
chinaskill_glance_c29a53cf-0bc9-40b2-aa33-f8fee3580d51
```

### 12. OpenStack Glance backed by Cinder (1 point)

On a self-built OpenStack platform, modify the relevant parameters so that Glance can use Cinder as its backend storage and images are stored in Cinder volumes. Use the cirros-0.3.4-x86_64-disk.img file to create the cirros-image image stored in the cirros-cinder volume, then create an instance that boots from a Cinder volume based on cirros-image. When done, submit the controller node's username, password and IP address.

```plain
[root@controller ~]# vi /etc/glance/glance-api.conf
show_multiple_locations = true
[glance_store]
stores = file,http,cinder
default_store = cinder
filesystem_store_datadir = /var/lib/glance/images/
cinder_store_auth_address=http://controller:5000/v3
cinder_store_user_name=glance
cinder_store_project_name=cirros-cinder
[root@controller ~]# vim /etc/cinder/cinder.conf
allowed_direct_url_schemes = cinder
image_upload_use_internal_tenant = true
[root@controller ~]# systemctl restart openstack-*
[root@controller ~]# glance image-create --name cirros-image --disk-format qcow2 --container-format bare < cirros-0.3.4-x86_64-disk.img
----------------- marking -----------------
[root@controller ~]# vi /etc/glance/glance-api.conf
change:
#show_multiple_locations = false
show_multiple_locations = True
[root@controller ~]# vi /etc/cinder/cinder.conf
change:
#image_upload_use_internal_tenant = false
image_upload_use_internal_tenant = True
allowed_direct_url_schemes = cinder
```

### 13. Change the file handle limit (1 point)

Under heavy concurrency, Linux servers usually need tuning up front. By default the maximum number of file handles is 1024; when the server hits the limit under heavy load it reports "too many open files". Create an instance and permanently raise the maximum file handle limit to 65535. When done, submit the node's username, password and IP address.

```plain
[root@controller ~]# ulimit -n   # show the current limit
1024
[root@controller ~]# echo "* soft nofile 65535" >> /etc/security/limits.conf
[root@controller ~]# echo "* hard nofile 65535" >> /etc/security/limits.conf
# log out and back in
[root@controller ~]# ulimit -n
65535
```

### 14. OpenStack Nova timeout (1 point)

In OpenStack, because a single Python process cannot truly run in parallel, RPC requests may not be answered promptly, especially when the target node is running long periodic tasks, so the timeout and tolerated wait time must be considered together. Modify the Nova configuration to extend the timeout to 300. When done, submit the username, password and IP address of the changed node.

```plain
[root@controller ~]# vim /etc/nova/nova.conf
rpc_response_timeout=60
change to
rpc_response_timeout=300
```

### 15. Linux kernel tuning (1 point)

On a Linux server, TCP requires a four-way teardown to close an established connection; if a step is missing, the connection hangs in a half-dead state and its resources are never released. Because servers manage huge numbers of connections, unused connections must be torn down completely or masses of dead connections will waste server resources. Create a CentOS 7.9 instance and modify the configuration to enable SYN cookies, allow TIME-WAIT sockets to be reused for new TCP connections, enable fast recycling of TIME-WAIT sockets, and change the default TIMEOUT to 30. When done, submit the username, password and IP address of the changed node.

```plain
[root@controller ~]# vim /etc/sysctl.conf
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
[root@controller ~]# sysctl -p
net.ipv4.icmp_echo_ignore_all = 0
vm.dirty_expire_centisecs = 6000
vm.swappiness = 20
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle =
```
1net.ipv4.tcp_fin_timeout = 30```### 16、OpenStack参数调优(1分)OpenStack 各服务内部通信都是通过 RPC 来交互,各 agent 都需要去连接 RabbitMQ;随着各服务 agent 增多,MQ 的连接数会随之增多,最终可能会到达上限,成为瓶颈。在自行搭建的 OpenStack 私有云平台或赛项提供的 all-in-one 平台上,分别通过用户级别、系统级别、配置文件来设置 RabbitMQ 服务的最大连接数为10240。完成后提交控制节点的用户名、密码和IP地址到答题框。```plain[root@controller ~]# vim /etc/security/limits.conf添加:openstack soft nofile 10240openstack hard nofile 10240[root@controller ~]# vim /etc/sysctl.conf添加:fs.file-max = 10240[root@controller ~]# vim /usr/lib/systemd/system/rabbitmq-server.service[Service]添加:LimitNOFILE=10240[root@controller ~]# systemctl daemon-reload[root@controller ~]# systemctl restart rabbitmq-server[root@controller ~]# rabbitmqctl status[{total_limit,10140},```### 17、Nova内存保留(1分)在OpenStack中,默认的CPU超配比例是1:16,内存超配比例是1:1.5。当宿主机使用swap交换分区来为虚拟机分配内存的时候,则虚拟机的性能将急速下降。生产环境上一般不建议开启内存超售(建议配置比例1:1)。请编辑nova.conf文件,将内存预留量配置为4GB,保证该部分内存不能被虚拟机使用。配置完成后提交改动节点的用户名、密码和IP地址到答题框。```plainreserved_host_memory_mb=4096```### 18、KVM调优(1分)在自行搭建的 OpenStack 私有云平台或赛项提供的 all-in-one 平台上,修改相关配置文件,启用-device virtio-net-pci in kvm。完成后提交控制节点的用户名、密码和IP地址到答题框。```plain[root@controller ~]# vim /etc/nova/nova.confuse_virtio_for_bridges=true改为:--libvirt_use_virtio_for_bridges=true```### 19、KVM I/O优化(1分)使用自行搭建的OpenStack私有云平台,优化KVM的I/O调度算法,将默认的模式修改为none模式。完成后提交控制节点的用户名、密码和IP地址到答题框。```plain[root@controller ~]# echo none > /sys/block/vda/queue/scheduler  [root@controller ~]# echo none > /sys/block/vdb/queue/scheduler [root@controller ~]# cat /sys/block/vd*/queue/scheduler[none] mq-deadline kyber [none] mq-deadline kyber ```### 20、redis服务调优-AOF(1分)使用提供的OpenStack私有云平台,申请一台centos7.9系统的云主机,使用提供的http源,自行安装Redis服务并启动。在Redis中,AOF配置为以三种不同的方式在磁盘上执行write或者fsync。假设当前Redis压力过大,请配置Redis不执行fsync。除此之外,避免AOF文件过大,Redis会进行AOF重写,生成缩小的AOF文件。请修改配置,让AOF重写时,不进行fsync操作。配置完成后提交Redis节点的用户名、密码和IP地址到答题框。```plain[root@controller ~]# yum install redis -y[root@controller ~]# vim /etc/redis.confno-appendfsync-on-rewrite no改为:no-appendfsync-on-rewrite yesappendfsync no[root@controller ~]# systemctl restart redis```### 21、Redis服务调优-内存大页(1分)使用提供的OpenStack私有云平台,申请一台centos7.9系统的云主机,使用提供的http源,自行安装Redis服务并启动。因为Redis服务采用了内存大页,生成RDB期间,即使客户端修改的数据只有50B的数据,Redis需要复制2MB的大页。当写的指令比较多的时候就会导致大量的拷贝,导致性能变慢。请修改Redis的内存大页机制,规避大量拷贝时的性能变慢问题。配置完成后提交Redis节点的用户名、密码和IP地址到答题框。```plain[root@controller ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled [root@controller ~]# cat /sys/kernel/mm/transparent_hugepage/enabled always madvise [never]```### 22、Raid磁盘阵列管理(1分)在OpenStack私有云平台,创建一台云主机(镜像使用CentOS7.9,flavor可自定义),并创建一个40G大小的cinder块存储,将块存储连接到云主机,然后在云主机上对云硬盘进行操作。要求分出4个大小为5G的分区,使用这4个分区,创建名为/dev/md5、raid级别为5的磁盘阵列加一个热备盘(/dev/vdb4为热备盘)。完成后提交云主机的用户名、密码和IP地址到答题框。```plain[root@raid ~]# mdadm -Cv /dev/md5 -l5 -n3 /dev/vdb[1-3] --spare-devices=1 /dev/vdb4[root@raid ~]# mdadm -D /dev/md5```### 23、堡垒机安装与使用(1分)使用提供的OpenStack平台申请一台云主机,使用提供的软件包安装JumpServer堡垒机服务,并配置使用该堡垒机对接自己安装的controller和compute节点。完成后提交JumpServer节点的用户名、密码和IP地址到答题框。```plain[root@jum ~]# curl -O http://192.168.100.91/image/iaas/jumpserver.tar.gz[root@jum ~]# tar -zxvf jumpserver.tar.gz -C /opt/[root@jum ~]# rm -rf /etc/yum.repos.d/*[root@jum ~]# vi /etc/yum.repos.d/local.repo[jum]name=jumbaseurl=file:///opt/jumpserver-repogpgcheck=0enabled=1[root@jum ~]# yum install python2 -y# 安装docker[root@jum ~]# cp -rf /opt/docker/* /usr/bin/[root@jum ~]# chmod 775 /usr/bin/docker*[root@jum ~]# cp -rf /opt/docker.service /etc/systemd/system/[root@jum ~]# chmod 775 /etc/systemd/system/docker.service [root@jum ~]# systemctl restart docker# 安装jum[root@jum ~]# cd /opt/images/[root@jum images]# sh load.sh[root@jum 
images]# mkdir -p /opt/jumpserver/{core,lion,koko,mysql,nginx,redis}[root@jum images]# cp -rf /opt/config/ /opt/jumpserver[root@jum images]# cd /opt/[root@jum opt]# source static.env[root@jum opt]# cd compose/[root@jum compose]# sh up.shCreating network "jms_net" with driver "bridge"Creating jms_mysql ... doneCreating jms_redis ... doneCreating jms_core  ... doneCreating jms_nginx  ... doneCreating jms_lina   ... doneCreating jms_koko   ... doneCreating jms_lion   ... doneCreating jms_celery ... doneCreating jms_luna   ... done```### 24、skywalking服务部署(1分)使用提供的OpenStack私有云平台,申请一台centos7.9系统的云主机,使用提供的软件包安装elk服务和skywalking服务,将skywalking的UI访问端口修改为8888。接下来再申请一台CentOS7.9的云主机,用于搭建gpmall商城应用,并配置SkyWalking Agent,将gpmall的jar包放置探针并启动。安装与配置完成后提交skywalking节点的用户名、密码和IP地址到答题框。安装与配置完成后提交该节点的用户名、密码和IP地址到答题框。```plain[root@vm1 ~]# hostnamectl set-hostname node-1[root@node-2 ~]# hostnamectl set-hostname node-2[root@node-1 ~]# curl -O http://192.168.100.91/image/iaas/skywalking.tar.gz[root@node-1 ~]# tar -zxvf skywalking.tar.gz -C /opt/[root@node-1 ~]# tar -zxvf /opt/skywalking/elasticsearch-7.17.0-linux-x86_64.tar.gz -C /opt/[root@node-1 ~]# cd /opt/elasticsearch-7.17.0/[root@node-1 elasticsearch-7.17.0]# mkdir data[root@node-1 elasticsearch-7.17.0]# vi config/elasticsearch.ymlcluster.name: my-application # 取消注释node.name: node-1 # 取消注释path.data: /opt/elasticsearch-7.17.0/data # 取消注释并修改path.logs: /opt/elasticsearch-7.17.0/logs # 取消注释并修改network.host: 0.0.0.0 # 取消注释并修改cluster.initial_master_nodes: ["node-1"] # 取消注释并修改# 添加下面三行:http.cors.enabled: truehttp.cors.allow-origin: "*"http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type# 创建Elasticsearch启动用户,并设置属组及权限[root@node-1 elasticsearch-7.17.0]# groupadd elsearch[root@node-1 elasticsearch-7.17.0]# useradd elsearch -g elsearch -p elasticsearch[root@node-1 elasticsearch-7.17.0]# chown -R elsearch:elsearch /opt/elasticsearch-7.17.0# 修改资源限制及内核配置,添加如下内容:[root@node-1 elasticsearch-7.17.0]# vi /etc/security/limits.conf* hard nofile 65536* soft nofile 65536[root@node-1 elasticsearch-7.17.0]# vi /etc/sysctl.confvm.max_map_count=262144[root@node-1 elasticsearch-7.17.0]# sysctl -pvm.max_map_count = 262144[root@node-1 ~]# cd /opt/elasticsearch-7.17.0/[root@node-1 elasticsearch-7.17.0]# su elsearch[elsearch@node-1 elasticsearch-7.17.0]$ ./bin/elasticsearch -d[root@node-1 ~]# netstat -ntlp# 9200[root@node-1 ~]# tar -zxvf /opt/skywalking/jdk-8u144-linux-x64.tar.gz -C /usr/local/[root@node-1 ~]# vi /etc/profileexport JAVA_HOME=/usr/local/jdk1.8.0_144export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jarexport PATH=$PATH:${JAVA_HOME}/bin[root@node-1 ~]# source /etc/profile[root@node-1 ~]# tar -zxvf /opt/skywalking/apache-skywalking-apm-es7-8.0.0.tar.gz -C /opt/[root@node-1 ~]# cd /opt/apache-skywalking-apm-bin-es7/[root@node-1 apache-skywalking-apm-bin-es7]# vi config/application.ymlstorage:  selector: ${SW_STORAGE:elasticsearch7} # 修改为elasticsearch7  elasticsearch:  elasticsearch7:    nameSpace: ${SW_NAMESPACE:""}    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:192.168.100.169:9200} # 修改ip    [root@node-1 apache-skywalking-apm-bin-es7]# ./bin/oapService.shSkyWalking OAP started successfully![root@node-1 apache-skywalking-apm-bin-es7]# netstat -ntlp1280011800[root@node-1 apache-skywalking-apm-bin-es7]# vi webapp/webapp.yml改为:server:  port: 8888[root@node-1 apache-skywalking-apm-bin-es7]# ./bin/webappService.sh SkyWalking Web Application started successfully![root@node-1 apache-skywalking-apm-bin-es7]# netstat 
-ntlp8888[root@node-1 apache-skywalking-apm-bin-es7]# cp -rvf /opt/apache-skywalking-apm-bin-es7/agent/ /root/nc -lk 监听端口[root@raid ~]# nc -lk 9200 &                 [1] 7718[root@raid ~]# nc -lk 12800 &[2] 7719[root@raid ~]# nc -lk 11800 &[3] 7720[root@raid ~]# nc -lk 8888 & [4] 7722[root@node-1 apache-skywalking-apm-bin-es7]# cp -rvf /opt/apache-skywalking-apm-bin-es7/agent/ /root/[root@node-1 ~]# ls /root/agentactivations  bootstrap-plugins  config  logs  optional-plugins  plugins  skywalking-agent.jar```### 25、Redis一主二从三哨兵模式(1分)使用提供的OpenStack私有云平台,申请三台CentOS7.9系统的云主机,使用提供的http源,在三个节点自行安装Redis服务并启动,配置Redis的访问需要密码,密码设置为123456。然后将这三个Redis节点配置为Redis的一主二从三哨兵架构,即一个Redis主节点,两个从节点,三个节点均为哨兵节点。配置完成后提交Redis主节点的用户名、密码和IP地址到答题框。```plain# 三台节点安装redis[root@redis1 ~]# rm -rf /etc/yum.repos.d/*[root@redis1 ~]# vi /etc/yum.repos.d/http.repo[centos]name=centosbaseurl=http://192.168.100.91/image/iaas/centos7.9/gpgcheck=0enabled=1[root@redis1 ~]# yum install redis -y# 修改redis1节点配置文件[root@redis1 ~]# vi /etc/redis.conf# bind 127.0.0.1   # 加上注释protected-mode no  # 修改为nodaemonize yes      # 改为yes# requirepass foobared 下方添加:requirepass "123456"# masterauth <master-password> 下方添加:masterauth "123456"[root@redis1 ~]# systemctl restart redis# 修改redis2、redis3节点[root@redis2 ~]# vi /etc/redis.conf# bind 127.0.0.1 # 加注释protected-mode no # 改为nodaemonize yes # 改为yes# requirepass foobared 下方添加:requirepass "123456"# masterauth <master-password> 下方添加:masterauth "123456"# slaveof <masterip> <masterport> 下方添加:slaveof 192.168.100.115 6379[root@redis2 ~]# systemctl restart redisredis3 同理[root@redis1 ~]# systemctl restart redis             [root@redis1 ~]# redis-cli -a 123456 info replicationconnected_slaves:2[root@redis1 ~]# vi /etc/sentinel.confport 26379protected-mode no  # 保护模式daemonize yessentinel monitor redis1 192.168.100.115 6379 2sentinel auth-pass redis1 123456[root@redis2 ~]# vi /etc/sentinel.confport 26380protected-mode nodaemonize yessentinel monitor redis1 192.168.100.115 6379 2sentinel auth-pass redis1 123456[root@redis3 ~]# vi /etc/sentinel.confport 26381protected-mode nodaemonize yes sentinel monitor redis1 192.168.100.115 6379 2sentinel auth-pass redis1 123456# 启动哨兵模式[root@redis1 ~]# redis-server /etc/sentinel.conf --sentinel[root@redis2 ~]# redis-server /etc/sentinel.conf --sentinel[root@redis3 ~]# redis-server /etc/sentinel.conf --sentinel```### 26、OpenStack Heat运维:创建容器(1分)在自行搭建的OpenStack私有云平台上,在/root目录下编写Heat模板create\_container.yaml,要求执行yaml文件可以创建名为heat-swift的容器。完成后提交控制节点的用户名、密码和IP地址到答题框。(在提交信息前请准备好yaml模板执行的环境)```plain[root@controller ~]# vim create_container.yamlheat_template_version: 2015-10-15resources:  container:    type: OS::Swift::Container    properties:      name: heat-swift[root@controller ~]# openstack stack create -t create_container.yaml test[root@controller ~]# openstack stack delete test -y```### 27、排错:镜像排错(1分)使用赛项提供的error-image镜像启动云主机,flavor使用4vcpu/12G内存/100G硬盘。启动后存在错误的私有云平台,错误现象为查看不到image列表,试根据错误信息排查云平台错误,使云平台可以查询到image信息。完成后提交云主机节点的用户名、密码和IP地址到答题框。```plain[root@error-image ~]# openstack image list                 Failed to discover available identity versions when contacting http://openstack:5000/v3. 
Attempting to parse version from URL.Unable to establish connection to http://openstack:5000/v3/auth/tokens: HTTPConnectionPool(host='openstack', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f541cde0150>: Failed to establish a new connection: [Errno 111] Connection refused',))[root@error-image ~]# ls /var/lib/glance/images/1443ce33-5fb3-4640-943f-202be8fd3cf6[root@error-image ~]# mysql -uroot -p000000 -e "update glance.images set status= 'active ' where id= '1443ce33-5fb3-4640-943f-202be8fd3cf6';"[root@error-image ~]# mysql -uroot -p000000 -e "update glance.images set deleted= 0 where id= '1443ce33-5fb3-4640-943f-202be8fd3cf6';"[root@error-image ~]# mysql -uroot -p000000 -e "update glance.image_locations set status= 'active' where image_id='1443ce33-5fb3-4640-943f-202be8fd3cf6';" [root@error-image ~]# mysql -uroot -p000000 -e "update glance.image_locations set deleted=0 where image_id='1443ce33-5fb3-4640-943f-202be8fd3cf6';"[root@error-image ~]# openstack image list+--------------------------------------+------+---------+| ID                                   | Name | Status  |+--------------------------------------+------+---------+| 1443ce33-5fb3-4640-943f-202be8fd3cf6 | None | active  |+--------------------------------------+------+---------+```### 28、排错:Keystone排错(1分)使用赛项提供的error-image镜像启动云主机,flavor使用4vcpu/12G内存/100G硬盘。启动后存在错误的私有云平台,错误现象为查看不到image列表,试根据错误信息排查云平台错误,使云平台可以查询到image信息。完成后提交云主机节点的用户名、密码和IP地址到答题框。```plain[root@error-image ~]# vi /etc/keystone/admin-openrc.shexport OS_PROJECT_DOMAIN_NAME=demoexport OS_USER_DOMAIN_NAME=demoexport OS_PROJECT_NAME=service  # 改为serviceexport OS_USERNAME=nova   # 改为novaexport OS_PASSWORD=000000export OS_AUTH_URL=http://openstack:5000/v3export OS_IDENTITY_API_VERSION=3export OS_IMAGE_API_VERSION=2export OS_AUTH_TYPE=password # 添加[root@error-image ~]# source /etc/keystone/admin-openrc.sh [root@error-image ~]# openstack role add --user admin --project admin admin [root@error-image ~]# openstack user set --password 000000 admin[root@error-image ~]# vi /etc/keystone/admin-openrc.shexport OS_PROJECT_DOMAIN_NAME=demoexport OS_USER_DOMAIN_NAME=demoexport OS_PROJECT_NAME=admin      # 改回adminexport OS_USERNAME=admin  # 改回adminexport OS_PASSWORD=000000export OS_AUTH_URL=http://openstack:5000/v3export OS_IDENTITY_API_VERSION=3export OS_IMAGE_API_VERSION=2[root@error-image ~]# source /etc/keystone/admin-openrc.sh [root@error-image ~]# openstack user list+----------------------------------+-----------+| ID                               | Name      |+----------------------------------+-----------+| e170a48a9c714755aebc6a5bf2727c57 | admin     || 223cf2b11186453b8662d727dbb3392b | demo      || a138b0fee7f248b18bb7972515b20ab2 | glance    || 1acf22d853b14a7e87c6e2009803985c | placement || 744373f8cb984cd184c7931f1dd4272c | nova      || d6c9f91fbc744ebf9422607eef160e60 | neutron   || ce2d159ada7648a8a54140ca8bc901a2 | gnocchi   |+----------------------------------+-----------+```### 29、排错:数据库排错(1分)使用赛项提供的error-mysql镜像启动云主机,flavor使用4vcpu/12G内存/100G硬盘。该云主机中存在错误的数据库服务,错误现象为数据库服务无法启动。请将数据库服务修复并启动,将数据库的密码修改为chinaskill123。完成后提交云主机节点的用户名、密码和IP地址到答题框。          
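此题笔记到题面为止,没有记录具体修复过程。下面只是一个通用排查思路的示意(并非本题标准答案:具体故障点、日志路径、原 root 密码均为假设,需按 error-mysql 镜像里的实际报错确定):
```plain
[root@error-mysql ~]# systemctl status mariadb                   # 先看服务状态与报错摘要
[root@error-mysql ~]# journalctl -u mariadb --no-pager | tail -n 30
[root@error-mysql ~]# tail -n 50 /var/log/mariadb/mariadb.log    # 假设日志在默认路径
# 按日志提示修复,例如数据目录属主被改动时:
[root@error-mysql ~]# chown -R mysql:mysql /var/lib/mysql
[root@error-mysql ~]# systemctl restart mariadb && systemctl enable mariadb
# 服务启动后将密码改为 chinaskill123(此处假设原密码为 000000,以实际为准)
[root@error-mysql ~]# mysqladmin -uroot -p000000 password 'chinaskill123'
[root@error-mysql ~]# mysql -uroot -pchinaskill123 -e "select 1;"
```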
  • [技术干货] KFC私有云
    1.3 配置网络 nmcli是NetworkManager的一个命令行工具,它提供了使用命令行配置由NetworkManager管理网络连接的方法。(1)controller节点# nmcli c m ens160 ipv4.address 192.168.100.10/24# nmcli c m ens160 ipv4.method manual    (修改为静态IP配置,默认是 auto)# nmcli c m ens192 ipv4.address 192.168.200.10/24# nmcli c m ens192 ipv4.method manual# nmcli c m ens192 ipv4.gateway 192.168.200.2# nmcli c m ens192 ipv4.dns 8.8.8.8# nmcli c m ens192 +ipv4.dns 114.114.114.114# nmcli c m ens192 connection.autoconnect yes  (设置开机启动网卡)# nmcli c reload# nmcli c up ens160# nmcli c up ens192(2)compute 节点# nmcli c m ens160 ipv4.address 192.168.100.20/24# nmcli c m ens160 ipv4.method manual# nmcli c m ens192 ipv4.address 192.168.200.20/24# nmcli c m ens192 ipv4.method manual# nmcli c m ens192 ipv4.gateway 192.168.200.2# nmcli c m ens192 ipv4.dns 8.8.8.8# nmcli c m ens192 +ipv4.dns 114.114.114.114# nmcli c m ens192 connection.autoconnect yes# nmcli c reload# nmcli c up ens160# nmcli c up ens1921.4 配置dnf源(1)挂载iso文件【controller】 # mount -o loop ITI_cloud_iaas_v1.0.iso /mnt/# cp -va /mnt/* /opt/# umount /mnt/(2)DNF源备份【controller/compute】# mv /etc/yum.repos.d/* /media/(3)创建repo文件【controller】 # dnf config-manager --add-repo file:///opt/YDY-CLOUD/yoga-repo/# echo gpgcheck=0 >> /etc/yum.repos.d/*.repo【compute】 # dnf config-manager --add-repo ftp://192.168.100.10/YDY-CLOUD/yoga-repo/# sed -i "s#gpgcheck=1#gpgcheck=0#" /etc/dnf/dnf.conf# echo gpgcheck=0 >> /etc/yum.repos.d/*.repo(4)配置防火墙和Selinux【controller】# setenforce 0# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux# systemctl disable --now firewalld.service(5)搭建FTP服务器,设置匿名用户免密访问节点/opt目录【controller】# dnf -y install vsftpd# sed -i '1a\anon_root=\/opt\/' /etc/vsftpd/vsftpd.conf# sed -i 's/anonymous_enable=NO/anonymous_enable=YES/' /etc/vsftpd/vsftpd.conf# systemctl enable --now vsftpd(6)清除缓存,验证yum源【controller/compute】# dnf clean all# dnf makecache2 基于RPM部署本文档基于OpenStack经典的双节点环境进行部署,分别是控制节点(Controller)、计算节点(Compute)。在资源有限的情况下,可以不单独部署存储节点(Storage),把存储节点上的服务(Cinder、Swift、Manila)部署到计算节点即可。2.1 安装私有云平台部署工具包【controller/compute】# dnf -y install iaas-yoga2.2 配置私有云平台环境变量【controller/compute】编辑文件/etc/1cloud/openrc.sh,此文件是安装过程中的各项参数,根据每项参数上一行的说明及服务器实际情况进行配置。# grep -Ev "^$|^#" /etc/1cloud/openrc.sh CONTROLLER_HOST="192.168.100.10"CONTROLLER_NAME="controller"CONTROLLER_PASS="jsydy@2024"COMPUTE_HOST="192.168.100.20"COMPUTE_NAME="compute"COMPUTE_PASS="jsydy@2024"STORAGE_HOST="192.168.100.20"STORAGE_NAME="compute"STORAGE_PASS="jsydy@2024"Network_segment_mask="192.168.100.0/24"INTERFACE_NAME="ens192"Physical_interface="provider"MARIADB_PASS="000000"RABBIT_PASS="000000"ADMIN_PASS="000000"KEYSTONE_DBPASS="000000"GLANCE_PASS="000000"GLANCE_DBPASS="000000"NOVA_DBPASS="000000"NOVA_PASS="000000"PLACEMENT_PASS="000000"PLACEMENT_DBPASS="000000"NEUTRON_PASS="000000"NEUTRON_DBPASS="000000"METADATA_SECRET="yoga"PROMETHEUS_VERSION="2.43.0"PROMETHEUS_PORT="9091"NODE_EXPORTER_VERSION="1.5.0"MEMCACHED_EXPORTER_VERSION="0.11.2"OPENSTACK_EXPORTER_VERSION="1.6.0"SKYLINE_DBPASS="000000"SKYLINE_SERVICE_PASS="000000"CINDER_DBPASS="000000"CINDER_PASS="000000"CINDER_DISK="nvme0n2p1"CINDER_VG="cinder-volumes"SWIFT_PASS="000000"SWIFT_DISK="nvme0n2p2"HEAT_DBPASS="000000"HEAT_PASS="000000"HEAT_DOMAIN_PASS="000000"TROVE_DBPASS="000000"TROVR_PASS="000000"AODH_DBPASS="000000"AODH_PASS="000000"OCTAVIA_PASS="000000"OCTAVIA_DBPASS="000000"REDIS_PASS="000000"GNOCCHI_DBPASS="000000"GNOCCHI_PASS="000000"CEILOMETER_PASS="000000"RALLY_DBPASS="000000"RALLY_PASS="000000"    CYBORG_DBPASS="000000"CYBORG_PASS="000000"2.3 通过脚本初始化系统环境【controller/compute】 # 
iaas-install-pre-cloud.sh
2.4 通过脚本安装数据库服务
【controller】 # iaas-install-mariadb.sh
2.5 通过脚本安装Keystone服务
【controller】 # iaas-install-keystone.sh
2.6 通过脚本安装Glance服务
【controller】 # iaas-install-glance.sh
2.6.1 创建镜像
# cd /opt/YDY-CLOUD/images/
# xz -d openEuler-22.09-x86_64.qcow2.xz
# openstack image create --disk-format qcow2 --progress --file openEuler-22.09-x86_64.qcow2 openEuler-22.09
# openstack image create --disk-format qcow2 --progress --file noble-server-cloudimg-amd64.img Ubuntu-24.04
2.7 通过脚本安装Nova服务
【controller】 # iaas-install-nova-controller.sh
【compute】 # iaas-install-nova-compute.sh
2.8 通过脚本安装Neutron服务
【controller】 # iaas-install-neutron-controller-openvswitch.sh
【compute】 # iaas-install-neutron-compute-openvswitch.sh
2.9 通过脚本安装Horizon服务
【controller】 # iaas-install-horizon.sh
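以上脚本全部执行完成后,建议做一轮基础验证。以下为示例命令,假设安装脚本会生成管理员环境变量文件(这里沿用其它笔记中的 /etc/keystone/admin-openrc.sh,实际路径以部署结果为准):
# source /etc/keystone/admin-openrc.sh
# openstack service list            (keystone、glance、nova、neutron 等服务应全部注册)
# openstack compute service list    (各 nova 服务 State 应为 up)
# openstack network agent list      (各 agent 应为 UP)
# openstack image list              (2.6.1 创建的 openEuler-22.09、Ubuntu-24.04 应为 active)
最后用浏览器访问 Horizon 确认可以登录(地址以 2.9 脚本实际配置为准,例如 http://192.168.100.10/dashboard)。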
  • [技术干货] 一键部署k8s最终版2
    模块二 容器云(30 分)任务 1 容器云服务搭建(5 分)2.1.1 部署容器云平台使用 OpenStack 私有云平台创建两台云主机,分别作为 Kubernetes 集群的 master 节点和 node 节点,然后完成 Kubernetes 集群的部署,并完成 Istio 服务网 格、KubeVirt 虚拟化和 Harbor 镜像仓库的部署。创建俩台云主机并配网# Kubernetes 集群的部署[root@localhost ~]# mount -o loop chinaskills_cloud_paas_v2.0.2.iso /mnt/[root@localhost ~]# cp -rfv /mnt/* /opt/[root@localhost ~]# umount /mnt/[root@master ~]# hostnamectl set-hostname master && su[root@worker ~]# hostnamectl set-hostname worker && su# 安装kubeeasy[root@master ~]# mv /opt/kubeeasy /usr/bin/kubeeasy# 安装依赖环境[root@master ~]# kubeeasy install depend \--host 192.168.59.200,192.168.59.201 \--user root \--password 000000 \--offline-file /opt/dependencies/base-rpms.tar.gz# 安装k8s[root@master ~]# kubeeasy install k8s \--master 192.168.59.200 \--worker 192.168.59.201 \--user root \--password 000000 \--offline-file /opt/kubernetes.tar.gz# 安装istio网络[root@master ~]# kubeeasy add --istio istio # 安装kubevirt虚拟化[root@master ~]# kubeeasy add --virt kubevirt# 安装harbor仓库[root@master ~]# kubeeasy add --registry harbor[root@k8s-master-node1 ~]# vim pod.yamlapiVersion: v1kind: Podmetadata:  name: examspec:  containers:  - name: exam    image: nginx:latest    imagePullPolicy: IfNotPresent    env:    - name: exam      value: "2022"[root@k8s-master-node1 ~]# kubectl apply -f pod.yaml[root@k8s-master-node1 ~]# kubectl get pod#部署 Istio 服务网格[root@k8s-master-node1 ~]# kubectl create ns examnamespace/exam created[root@k8s-master-node1 ~]# kubectl edit ns exam更改为:  labels:    istio-injection: enabled[root@k8s-master-node1 ~]# kubectl describe ns exam  #查看任务 2 容器云服务运维(15 分)2.2.1 容器化部署 MariaDB编写 Dockerfile 文件构建 mysql 镜像,要求基于 centos 完成 MariaDB 数据 库的安装与配置,并设置服务开机自启。上传Hyperf.tar.gz包[root@k8s-master-node1 ~]#tar -zxvf Hyperf.tar.gz &&cd hyperf[root@k8s-master-node1 hyperf]#vim local.repo[yum]name=yumbaseurl=file:///root/yumgpgcheck=0enabled=1[root@k8s-master-node1 hyperf]#vim mysql_init.sh#!/bin/bashmysql_install_db --user=rootmysqld_safe --user=root & sleep 8;mysqladmin -u root password 'root'mysql -uroot -proot -e "grant all on *.* to 'root'@'%' identified by 'root'; flush privileges;"mysql -uroot -proot -e " create database jsh_erp;use jsh_erp;source /opt/hyperf_admin.sql;"[root@k8s-master-node1 hyperf]# vim Dockerfile-mariadbFROM centos:centos7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*COPY local.repo /etc/yum.repos.d/COPY yum /root/yumRUN yum install mariadb-server -yCOPY hyperf_admin.sql /opt/COPY mysql_init.sh /opt/RUN bash /opt/mysql_init.shEXPOSE 3306CMD ["mysqld_safe","--user=root"][root@k8s-master-node1 hyperf]# docker build -t hyperf-mariadb:v1.0 -f Dockerfile-mariadb .2.2.2 容器化部署 Redis编写 Dockerfile 文件构建 redis 镜像,要求基于 centos 完成 Redis 服务的安 装和配置,并设置服务开机自启。[root@k8s-master-node1 hyperf]# vim Dockerfile-redis FROM centos:7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*ADD local.repo /etc/yum.repos.d/ADD yum /root/yumRUN yum install redis -yRUN sed -i 's/bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.confRUN sed -i 's/protected-mode yes/proteceted-mode no/g' /etc/redis.confEXPOSE 6379CMD ["redis-server","/etc/redis.conf"]   [root@k8s-master-node1 hyperf]# docker build -t hyperf-redis:v1.0 -f Dockerfile-redis .2.2.3 容器化部署 Nginx编写 Dockerfile 文件构建 nginx 镜像,要求基于 centos 完成 Nginx 服务的安 装和配置,并设置服务开机自启。[root@k8s-master-node1 hyperf]# vim Dockerfile-nginxFROM centos:7.9.2009MAINTAINER chinaskillsRUN rm -rf /etc/yum.repos.d/*ADD local.repo /etc/yum.repos.d/ADD yum /root/yumRUN yum install nginx -yRUN /bin/bash -c 'echo init ok'EXPOSE 80CMD ["nginx","-g","daemon off;"][root@k8s-master-node1 
hyperf]# docker build -t hyperf-nginx:v1.0 -f Dockerfile-nginx .2.2.4 容器化部署 Explorer编写 Dockerfile 文件构建 explorer 镜像,要求基于 centos 完成 PHP 和 HTTP 环境的安装和配置,并设置服务开机自启。上传Explorer.tar.gz包[root@k8s-master-node1 ~]#tar -zxvf Explorer.tar.gz & cd KodExplorer/[root@k8s-master-node1 KodExplorer/]#vim local.repo[yum]name=yumbaseurl=file:///root/yumgpgcheck=0enabled=1[root@k8s-master-node1 KodExplorer/]#vim mysql_init.sh#!/bin/bashmysql_install_db --user=rootmysqld_safe --user=root & sleep 8;mysqladmin -u root password 'root'mysql -uroot -proot -e "grant all on *.* to 'root'@'%' identified by 'root'; flush privileges;"[root@k8s-master-node1 KodExplorer/]#vim Dockerfile-mariadbFROM centos:centos7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*COPY local.repo /etc/yum.repos.d/COPY yum /root/yumRUN yum install mariadb mariadb-server -yCOPY mysql_init.sh /opt/RUN bash /opt/mysql_init.shEXPOSE 3306CMD ["mysqld_safe","--user=root"][root@k8s-master-node1 KodExplorer/]# docker build -t kod-mysql:v1.0 -f Dockerfile-mariadb .[root@k8s-master-node1 KodExplorer/]# vim Dockerfile-redis FROM centos:7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*ADD local.repo /etc/yum.repos.d/ADD yum /root/yumRUN yum install redis -yRUN sed -i 's/bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.confRUN sed -i 's/protected-mode yes/proteceted-mode no/g' /etc/redis.confEXPOSE 6379CMD ["redis-server","/etc/redis.conf"] [root@k8s-master-node1 KodExplorer/]# docker build -t kod-redis:v1.0 -f Dockerfile-redis .[root@k8s-master-node1 KodExplorer/]# vim Dockerfile-nginxFROM centos:7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*ADD local.repo /etc/yum.repos.d/ADD yum /root/yumRUN yum install nginx -yRUN /bin/bash -c 'echo init ok'EXPOSE 80CMD ["nginx","-g","daemon off"][root@k8s-master-node1 KodExplorer/]# docker build -t kod-nginx:v1.0 -f Dockerfile-nginx .[root@k8s-master-node1 KodExplorer]# vim Dockerfile-phpFROM centos:centos7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*COPY local.repo /etc/yum.repos.d/COPY yum /root/yumRUN yum install httpd php php-cli unzip php-gd php-mbstring -yWORKDIR /var/www/htmlCOPY php/kodexplorer4.37.zip .RUN unzip kodexplorer4.37.zipRUN chmod -R 777 /var/www/htmlRUN sed -i 's/#ServerName www.example.com:80/ServerName localhost:80/g' /etc/httpd/conf/httpd.conf EXPOSE 80CMD ["/usr/sbin/httpd","-D","FOREGROUND"]RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-develEXPOSE 9999CMD java -jar /root/app.jarSSS[root@k8s-master-node1 KodExplorer]#docker build -t kod-php:v1.0 -f Dockerfile-php .2.2.5 编排部署 Explorer编写 docker-compose.yaml 文件,要求使用镜像 mysql、redis、nginx 和 explorer 完成 Explorer 管理系统的编排部署。[root@k8s-master-node1 KodExplorer]# vi docker-compose.yamlversion: '3.2'services:                mysql:             container_name: mysql             image: kod-mysql:v1.0             volumes:                    - ./data/mysql:/var/lib/mysql                    - ./mysql/logs:/var/lib/mysql-logs             ports:                    - "3306:3306"             restart: always        redis:             container_name: redis             image:  kod-redis:v1.0             ports:                     - "6379:6379"             volumes:                     - ./data/redis:/data                     - ./redis/redis.conf:/usr/local/etc/redis/redis.conf             restart: always             command: redis-server /usr/local/etc/redis/redis.confnginx:             container_name: nginx             image: kod-nginx:v1.0             volumes:                    - ./www:/data/www                    - 
./nginx/logs:/var/log/nginx             ports:                    - "443:443"             restart: always             depends_on:                    - php-fpm             links:                    - php-fpm             tty: true         php-fpm:             container_name: php-fpm             image: kod-php:v1.0             ports:                     - "8090:80"             links:                     - mysql                     - redis             restart: always             depends_on:                     - redis                     - mysql[root@k8s-master-node1 KodExplorer/]#docker-compose up -d    #如果容器已经存在用docker stop id和docker rm id 来删除已存在容器  #查看服务[root@k8s-master-node1 KodExplorer/]#docker-compose ps在浏览器上通过http://IP:8090访问KodExplorer账号admin2.2.6 安装 GitLab 环境 新建命名空间 kube-ops,将 GitLab 部署到该命名空间下,然后完成 GitLab 服务的配置。上传CICD-Runner.tar.gz包[root@k8s-master-node1 ~]#tar -zxvf CICD-Runner.tar.gz[root@k8s-master-node1 ~]#cd cicd-runner/[root@k8s-master-node1 cicd-runner]# docker load -i images/image.tar[root@k8s-master-node1 cicd-runner]# kubectl create ns kube-opsnamespace/kube-ops created[root@k8s-master-node1 cicd-runner]# kubectl create deployment gitlab -n kube-ops --image=yidaoyun/gitlab-ce:v1.0 --port 80 --dry-run -o yaml > gitlab.yaml[root@k8s-master-node1 cicd-runner]# vim gitlab.yamlapiVersion: apps/v1kind: Deploymentmetadata:  creationTimestamp: null  labels:    app: gitlab  name: gitlab  namespace: kube-opsspec:  replicas: 1  selector:    matchLabels:      app: gitlab  #strategy: {}  template:    metadata:      #creationTimestamp: null      labels:        app: gitlab    spec:      containers:      - image: yidaoyun/gitlab-ce:v1.0        imagePullPolicy: IfNotPresent        name: gitlab-ce        ports:        - containerPort: 80        env:        - name: GITLAB_ROOT_PASSWORD          value: 'admin123456'[root@k8s-master-node1 cicd-runner]# kubectl apply -f gitlab.yaml deployment.apps/gitlab created[root@k8s-master-node1 cicd-runner]# kubectl get pod -n kube-ops NAME                     READY   STATUS    RESTARTS   AGEgitlab-df897d46d-vcjf6   1/1     Running   0          7s[root@k8s-master-node1 cicd-runner]# kubectl expose deployment gitlab -n kube-ops gitlab --port 80 --target-port 30880 --type NodePort --dry-run -o yaml >> gitlab.yaml [root@k8s-master-node1 cicd-runner]# vim gitlab.yamlapiVersion: v1kind: Servicemetadata:  creationTimestamp: null  labels:    app: gitlab  name: gitlab  namespace: kube-opsspec:  ports:  - port: 80    protocol: TCP    nodePort: 30880  selector:    app: gitlab  type: NodePort[root@k8s-master-node1 cicd-runner]# kubectl apply -f gitlab.yaml service/gitlab created[root@k8s-master-node1 cicd-runner]# kubectl get svc -n kube-ops NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGEgitlab   NodePort   10.96.133.116   <none>        80:30880/TCP   14s2.2.7 部署 GitLab Runner(x)将 GitLab Runner 部署到 kube-ops 命名空间下,并完成 GitLab Runner 在 GitLab 中的注册。百度打开192.168.59.200:30880root admin123456 #在这里获取部署runner的URL和令牌48XdJ5KYGoJPYjaa71gi [root@k8s-master-node1 cicd-runner]# cd manifests/[root@k8s-master-node1 manifests]# vim runner-configmap.yamlapiVersion: v1data:  REGISTER_NON_INTERACTIVE: "true"  REGISTER_LOCKED: "false"  METRICS_SERVER: "0.0.0.0:9100"  CI_SERVER_URL: "http://192.168.59.200:30880"  RUNNER_REQUEST_CONCURRENCY: "4"   RUNNER_EXECUTOR: "kubernetes"  KUBERNETES_NAMESPACE: "kube-ops"  KUBERNETES_PRIVILEGED: "true"  KUBERNETES_CPU_LIMIT: "1"[root@k8s-master-node1 manifests]#echo -n "48XdJ5KYGoJPYjaa71gi" | 
base64NDhYZEo1S1lHb0pQWWphYTcxZ2k=[root@k8s-master-node1 manifests]# kubectl create secret generic gitlab-ci-runner -n kube-ops --from-literal=NDhYZEo1S1lHb0pQWWphYTcxZ2k= --dry-run -o yaml > runner-statefulset.yaml# 进入添加labels字段即可[root@k8s-master-node1 manifests]# vim runner-statefulset.yamlapiVersion: v1data:  GITLAB_CI_TOKEN: NDhYZEo1S1lHb0pQWWphYTcxZ2k=kind: Secretmetadata:  name: gitlab-ci-runner  namespace: kube-ops  labels:    app: gitlab-ci-runner---apiVersion: apps/v1kind: StatefulSetmetadata:  name: gitlab-ci-runner  namespace: kube-ops  labels:    app: gitlab-ci-runnerspec:  serviceName: gitlab-ci-runner  updateStrategy:    type: RollingUpdate  replicas: 2  selector:    matchLabels:      app: gitlab-ci-runner  template:    metadata:      labels:        app: gitlab-ci-runner    spec:      securityContext:        runAsNonRoot: true # 则容器会以非 root 用户身份运行        runAsUser: 999        supplementalGroups: [999]      containers:      - image: yidaoyun/gitlab-runner:v1.0        imagePullPolicy: IfNotPresent        name: gitlab-runner        ports:        - containerPort: 9100        command:        - /scripts/run.sh        envFrom:        - configMapRef:            name: gitlab-ci-runner-cm        - secretRef:            name: gitlab-ci-token        env:        - name: RUNNER_NAME          valueFrom:            fieldRef:              fieldPath: metadata.name        volumeMounts:        - name: gitlab-ci-runner-scripts          mountPath: /scripts          readOnly: true # 将卷只读挂载到容器内      volumes:      - name: gitlab-ci-runner-scripts        projected:          sources:          - configMap:              name: gitlab-ci-runner-scripts              items:              - key: run.sh                path: run.sh                mode: 0775      restartPolicy: Always # 依次启动[root@k8s-master-node1 manifests]# kubectl apply -f runner-configmap.yaml configmap/gitlab-ci-runner-cm created[root@k8s-master-node1 manifests]# kubectl apply -f runner-scripts-configmap.yamlconfigmap/gitlab-ci-runner-scripts created[root@k8s-master-node1 manifests]# kubectl apply -f runner-statefulset.yamlsecret/gitlab-ci-token createdstatefulset.apps/gitlab-ci-runner created[root@k8s-master-node1 manifests]# kubectl get pod -n kube-opsNAME                     READY   STATUS    RESTARTS   AGEgitlab-ci-runner-0       1/1     Running   0          14sgitlab-ci-runner-1       1/1     Running   0          12sgitlab-df897d46d-vcjf6   1/1     Running   0          16h2.2.8 配置 GitLab 在 GitLab 中新建公开项目并导入离线项目包,然后将 Kubernetes 集群添加 到 GitLab 中。 [root@k8s-master-node1 cicd-runner]# cd springcloud/[root@k8s-master-node1 springcloud]# git config --global user.name "Administrator"[root@k8s-master-node1 springcloud]# git config --global user.email "admin@example.com"[root@k8s-master-node1 springcloud]# git remote remove origin[root@k8s-master-node1 springcloud]# git remote add origin http://192.168.59.200:30880/root/springcloud.git[root@k8s-master-node1 springcloud]# git add .warning: You ran 'git add' with neither '-A (--all)' or '--ignore-removal',whose behaviour will change in Git 2.0 with respect to paths you removed.Paths like '.gitlab-ci.yml' that areremoved from your working tree are ignored with this version of Git.* 'git add --ignore-removal <pathspec>', which is the current default,  ignores paths you removed from your working tree.* 'git add --all <pathspec>' will let you also record the removals.Run 'git status' to check the paths you removed from your working tree.[root@k8s-master-node1 springcloud]# git commit -m "Initial 
commit"[master db17cb0] Initial commit 1 file changed, 2 insertions(+)[root@k8s-master-node1 springcloud]# git push -u origin masterUsername for 'http://10.24.206.143:30880': root  # gitlab用户Password for 'http://root@10.24.206.143:30880':(admin123456)  # gitlab密码Counting objects: 1355, done.Delta compression using up to 4 threads.Compressing objects: 100% (1000/1000), done.Writing objects: 100% (1355/1355), 4.05 MiB | 0 bytes/s, done.Total 1355 (delta 269), reused 1348 (delta 266)remote: Resolving deltas: 100% (269/269), done.To http://10.24.206.143:30880/root/springcloud.git * [new branch]      master -> masterBranch master set up to track remote branch master from origin.    # 获取CA证书[root@k8s-master-node1 springcloud]# cat /etc/kubernetes/pki/ca.crt-----BEGIN CERTIFICATE-----MIIC/jCCAeagAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJlcm5ldGVzMB4XDTI0MDIyNTAzNDAxNloXDTM0MDIyMjAzNDAxNlowFTETMBEGA1UEAxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJUJPs6NpgvLNhrdFFyAO3P8IRwNJM25ijz3rtdO46a1dXXsWN6nVBzmXcYr7QkK9l1V/X5o8dxS46LXVbwO5gtOtO6Zu0NO55msTVw+HEHoPj2fh9s1tN4WCmtaCzHLz7Cgw90ze4/SdVx60t58xjzo9vEr6lCb3A39Qqh7DUCyu6J9XuhsjdCx+nPZv6rrKqm1Fnq4bx4zc4WAfoT4pQ21EQnlLfzKsI34FZjEFXKYSZn+94XXouY5E3Z+DSp9QJOf/FJtQ5w5f+/58U5s1ja/iEnBOUupn+f5oKbzZHJbk5prPA+vzOce8hQ4+LUcnoJkfxrTK6KHi5UMQQOtjTMCAwEAAaNZMFcwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFHY2aLho7/Eab1m6sJgCq4l/fwDfMBUGA1UdEQQOMAyCCmt1YmVybmV0ZXMwDQYJKoZIhvcNAQELBQADggEBAG91Daj4DylMJPkF1kbaQsbC6w45gI0A8wqL5dF4Y6FyPNMwUO28t8WBvcsiZ34u5Z67bDx9joYme/0kf/0kD5w1uBewNt0ronpeTYDsOq+yILRyY5XEY3CdKTXzkst0BkMjttfTHKHOfDy+/OmpeDtIKopp/BcyRYEQih7Givp1ITqhBQQm8kp6TAU2m0QrtlhebN6349LGOz2CxoQsp0YikqnEoFjaFSvn40vI6ttdek3cyQAEoNTTQ+zwz80IXCt3ODk1qBYRZdc10aXLszNtZ0MN2vbKRsJjmvihWBEmjO58DyV/H2ebXMKStBzbK5v4mjKW1Jg9ilra6fGSH8I=-----END CERTIFICATE-----# 获取令牌[root@k8s-master-node1 springcloud]# kubectl describe secrets -n kube-system default-token-h8h7n Name:         default-token-tgz8rNamespace:    kube-systemLabels:       <none>Annotations:  kubernetes.io/service-account.name: default              kubernetes.io/service-account.uid: d4111b82-49c8-481b-83ff-ff2619eb3d1b Type:  kubernetes.io/service-account-token Data====ca.crt:     1099 bytesnamespace:  11 bytestoken:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjN0Z3RzNDdfT3FGc0pHalJKWi1ZcHZ5TTF4cDB6X2duLWxhanViVkJXLVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXRnejhyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkNDExMWI4Mi00OWM4LTQ4MWItODNmZi1mZjI2MTllYjNkMWIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.RfWmtTwtY-WOXIibVOOsRjcvJRktI9O0pFOpR-VtjfJKVAuwwjxinQC8LaGvFZK9kooTvf1GKA261awk45uj-hZjN7T2rK9glea-D8YqwFRR5y7G6uU_SCqho2h1qC6T6ax30XCMuVgWe5RuvG0rXB1qnT72vy72K2iSCb9M7SuuqI-kElvf5M1l0zmrvN9xCvKebVtwt2hIuMAJW2fgNhiEMmHaXPVmVUYr_G5jrtP73HoDclGC2i2elJAySJXek7pxyzmaOlP7jWXYhaXjiU5BvX_PSUfLSt2PVpOEANNUyBowfZkOhIyoc0QQSd7-Wi0gx3Sd9hMwH7LXHRmt-w- 将获取的信息分别填入 2.2.9 构建 CI/CD 在项目中编写流水线脚本,然后触发自动构建,要求成构建代码、构建镜 像、推送镜像 Harbor、并发布服务到 Kubernetes 集群。  将tcp://localhost:2375改为tcp://docker-dind:2375[root@k8s-master-node1 springcloud]# kubectl edit -n kube-system cm coredns # 53后面添加一个gitlab# 添加映射[root@k8s-master-node1 ~]# cat /etc/hosts192.168.100.23 apiserver.cluster.local # 选择这一行# 登录harbor仓库[root@k8s-master-node1 
springcloud]# docker login 192.168.59.200Username: adminPassword: (Harbor12345)WARNING! Your password will be stored unencrypted in /root/.docker/config.json.Configure a credential helper to remove this warning. See[root@k8s-master-node1 springcloud]# cd ..[root@k8s-master-node1 cicd-runner]# vim DockerfileFROM nginx:latestRUN echo "Hello Golang In Gitlab CI,go1.10.3,/bin/app" >> /usr/share/nginx/html/index.html[root@k8s-master-node1 cicd-runner]# docker build -t 10.24.206.143/library/springcloud:master -f Dockerfile .Sending build context to Docker daemon  2.892GBStep 1/2 : FROM nginx:latest ---> de2543b9436bStep 2/2 : RUN echo "Hello Golang In Gitlab CI,go1.10.3,/bin/app" >> /usr/share/nginx/html/index.html ---> Running in a5b69ead6f7fRemoving intermediate container a5b69ead6f7f ---> 193d60448c3dSuccessfully built 193d60448c3dSuccessfully tagged 10.24.206.143/library/springcloud:master[root@k8s-master-node1 cicd-runner]# docker push 10.24.206.143/library/springcloud:master The push refers to repository [10.24.206.143/library/springcloud]09c5777979b4: Pushed a059c9abe376: Pushed 09be960dcde4: Pushed 18be1897f940: Pushed dfe7577521f0: Pushed d253f69cb991: Pushed fd95118eade9: Pushed master: digest: sha256:95218b2f4822bdbe6f937c74b3fe7879998385cd04d74c241e5706294239ee29 size: 177[root@k8s-master-node1 cicd-runner]# kubectl create ns gitlabnamespace/gitlab created# 使用刚刚生成的镜像[root@k8s-master-node1 cicd-runner]# vim deploymeng.yamlapiVersion: apps/v1kind: Deploymentmetadata:  creationTimestamp: null  labels:    app: gitlab-k8s-demo-dev  name: gitlab-k8s-demo-dev  namespace: gitlabspec:  replicas: 2  selector:    matchLabels:      app: gitlab-k8s-demo-dev  strategy: {}  template:    metadata:      creationTimestamp: null      labels:        app: gitlab-k8s-demo-dev    spec:      containers:      - image: 10.24.206.143/library/springcloud:master        name: springcloud        imagePullPolicy: IfNotPresent        ports:        - containerPort: 80---apiVersion: v1kind: Servicemetadata:  name: gitlab-k8s-demo-dev  namespace: gitlabspec:  ports:  - port: 80    nodePort: 30800  selector:    app: gitlab-k8s-demo-dev  type: NodePort[root@k8s-master-node1 cicd-runner]# kubectl apply -f deploymeng.yaml deployment.apps/gitlab-k8s-demo-dev createdservice/gitlab-k8s-demo-dev configured[root@k8s-master-node1 cicd-runner]# kubectl get deployments.apps -n gitlab NAME                  READY   UP-TO-DATE   AVAILABLE   AGEgitlab-k8s-demo-dev   2/2     2            2           2m11s[root@k8s-master-node1 cicd-runner]# kubectl get pod -n gitlab NAME                                   READY   STATUS    RESTARTS   AGEgitlab-k8s-demo-dev-76c8494bdd-hcwwd   1/1     Running   0          101sgitlab-k8s-demo-dev-76c8494bdd-hfm2n   1/1     Running   0          101s[root@k8s-master-node1 cicd-runner]# kubectl get svc -n gitlab NAME                  TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGEgitlab-k8s-demo-dev   NodePort   10.96.99.185   <none>        80:30800/TCP   31m2.2.10 服务网格:路由管理 将 Bookinfo 应用部署到 default 命名空间下,应用默认请求路由,将所有流 量路由到各个微服务的 v1 版本。然后更改请求路由 reviews,将指定比例的流量 从 reviews 的 v1 转移到 v3。[root@k8s-master-node1 ServiceMesh]# vim route.yamlapiVersion: networking.istio.io/v1alpha3kind: VirtualServicemetadata:  name: reviews-routespec:  hosts: # 将流量路由到指定主机  - reviews  http:  - name: "v1"    route: # 定义路由转发目的地列表(所有http流量都会被路由到标签为version:v1的reviews服务上)    - destination:        host: reviews        subset: v1  - name: "v2"    match:    - uri:        prefix: "/wpcatalog"    - uri:        prefix: 
"/consumercatalog"    rewrite: # 定义重写HTTP URL 或 Authority headers,不能与重定向同时配置,重写操作会在转发前执行      uri: "/newcatalog"    route:    - destination:        host: reviews        subset: v2[root@k8s-master-node1 ServiceMesh]# kubectl apply -f route.yamlvirtualservice.networking.istio.io/reviews-route created 【检测命令1】[0.5分]kubectl get virtualservice reviews-route -o jsonpath={.spec.http[*].match}【判分标准】"uri":{"prefix":"/wpcatalog"} || "uri":{"prefix":"/consumercatalog"}【检测命令2】[0.5分]kubectl get virtualservice reviews-route -o jsonpath={.spec.http[*].rewrite}【判分标准】{"uri":"/newcatalog"}2.2.11 KubeVirt 运维:VMI 管理 将提供的镜像在default命名空间下创建一台VMI,名称为exam,使用Service 对外暴露 VMI。[root@k8s-master-node1 ~]# vim Dockerfile FROM scratchADD exam.qcow2 /disk/[root@k8s-master-node1 ~]# docker build -t exam:v1.0 -f Dockerfile . [root@k8s-master-node1 ~]# vim exam.yaml apiVersion: kubevirt.io/v1kind: VirtualMachineInstancemetadata:  name: exam  labels:    app: examspec:  domain:    devices:      disks:      - name: containerdisk        disk:          bus: virtio      - name: cloudinitnodisk        disk:          bus: virtio    resources:      requests:        memory: 512Mi  volumes:    - name: containerdisk      containerDisk:        image: exam:v1.0        imagePullPolicy: IfNotPresent    - name: cloudinitnodisk      cloudInitNoCloud:        userData: |-          hostname: exam---apiVersion: v1kind: Servicemetadata:  creationTimestamp: null  name: exam  labels:    app: examspec:  ports:  - name: 80-80    port: 80    nodePort: 30082 # 节点端口    protocol: TCP    targetPort: 80 # 目标端口  selector:    app: exam  type: NodePort[root@k8s-master-node1 ~]# kubectl apply -f exam.yamlvirtualmachineinstance.kubevirt.io/exam createdservice/exam created[root@k8s-master-node1 ~]# kubectl get vmiNAME   AGE   PHASE     IP            NODENAME           READYexam   60s   Running   10.244.0.50   k8s-master-node1   True
  • [技术干货] 一键部署k8s最终版
    1. 控制节点主机名为 controller,设置计算节点主机名为 compute;[root@controller ~]# hostnamectl set-hostname controller && su[root@compute ~]# hostnamectl set-hostname compute && su2.hosts 文件将 IP 地址映射为主机名。[root@controller&&compute ~]#vi /etc/hosts192.168.100.10 controller192.168.100.20 compute[root@controller&&compute ~]#vi /etc/selinux/config更改SELINUX=disabled[root@controller&&compute ~]#setenforce 0[root@controller&&compute ~]#systemctl stop firewalld && systemctl disable firewalld3.配置 yum 源[root@controller ~]rm -rf /etc/yum.repos.d/*[root@controller&&compute ~]# vi /etc/yum.repos.d/http.repo[centos]name=centosbaseurl=http://192.168.133.130/centosgpgcheck=0enabled=1 [openstack]name=openstackbaseurl=http://192.168.133.130/openstack/iaas-repogpgcheck=0enabled=1[root@controller&&compute ~]#yum clean all && yum repolist && yum makecache 2.1.1 部署容器云平台使用 OpenStack 私有云平台创建两台云主机,分别作为 Kubernetes 集群的 master 节点和 node 节点,然后完成 Kubernetes 集群的部署,并完成 Istio 服务网 格、KubeVirt 虚拟化和 Harbor 镜像仓库的部署。创建俩台云主机并配网# Kubernetes 集群的部署[root@localhost ~]# mount -o loop chinaskills_cloud_paas_v2.0.2.iso /mnt/[root@localhost ~]# cp -rfv /mnt/* /opt/[root@localhost ~]# umount /mnt/[root@master ~]# hostnamectl set-hostname master && su[root@worker ~]# hostnamectl set-hostname worker && su# 安装kubeeasy[root@master ~]# mv /opt/kubeeasy /usr/bin/kubeeasy# 安装依赖环境[root@master ~]# kubeeasy install depend \--host 192.168.59.200,192.168.59.201 \--user root \--password 000000 \--offline-file /opt/dependencies/base-rpms.tar.gz# 安装k8s[root@master ~]# kubeeasy install k8s \--master 192.168.59.200 \--worker 192.168.59.201 \--user root \--password 000000 \--offline-file /opt/kubernetes.tar.gz# 安装istio网格[root@master ~]# kubeeasy add --istio istio# 安装kubevirt虚拟化[root@master ~]# kubeeasy add --virt kubevirt# 安装harbor仓库[root@master ~]# kubeeasy add --registry harbor[root@k8s-master-node1 ~]# vim pod.yamlapiVersion: v1kind: Podmetadata:  name: examspec:  containers:  - name: exam    image: nginx:latest    imagePullPolicy: IfNotPresent    env:    - name: exam      value: "2022"[root@k8s-master-node1 ~]# kubectl apply -f pod.yaml[root@k8s-master-node1 ~]# kubectl get pod#部署 Istio 服务网格[root@k8s-master-node1 ~]# kubectl create ns examnamespace/exam created[root@k8s-master-node1 ~]# kubectl edit ns exam更改为:  labels:    istio-injection: enabled[root@k8s-master-node1 ~]# kubectl describe ns exam  #查看任务 2 容器云服务运维(15 分)2.2.1 容器化部署 Node-Exporter编写 Dockerfile 文件构建 exporter 镜像,要求基于 centos 完成 Node-Exporter 服务的安装与配置,并设置服务开机自启。上传Hyperf.tar包[root@k8s-master-node1 ~]#tar -zxvf Hyperf.tar.gz[root@k8s-master-node1 ~]#cd hyperf/[root@k8s-master-node1 hyperf]#docker load -i centos_7.9.2009.tar上传node_exporter-1.7.0.linux-amd64.tar包[root@k8s-master-node1 hyperf]#vim Dockerfile-exporterFROM centos:centos7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*ADD node_exporter-1.7.0.linux-amd64.tar.gz /root/EXPOSE 9100ENTRYPOINT ["./root/node_exporter-1.7.0.linux-amd64/node_exporter"][root@k8s-master-node1 hyperf]#docker build -t monitor-exporter:v1.0 -f Dockerfile-exporter .2.2.2 容器化部署Alertmanager编写 Dockerfile 文件构建 alert 镜像,要求基于 centos:latest 完成 Alertmanager 服务的安装与配置,并设置服务开机自启。上传alertmanager-0.26.0.linux-amd64.tar包[root@k8s-master-node1 hyperf]#vim Dockerfile-alertFROM centos:centos7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*ADD alertmanager-0.26.0.linux-amd64.tar.gz /root/EXPOSE 9093 9094ENTRYPOINT ["./root/alertmanager-0.26.0.linux-amd64/alertmanager","--config.file","/root/alertmanager-0.26.0.linux-amd64/alertmanager.yml"][root@k8s-master-node1 hyperf]#docker 
  • [技术干货] 一键部署k8s最终版
    1. 控制节点主机名为 controller,设置计算节点主机名为 compute;[root@controller ~]# hostnamectl set-hostname controller && su[root@compute ~]# hostnamectl set-hostname compute && su2.hosts 文件将 IP 地址映射为主机名。[root@controller&&compute ~]#vi /etc/hosts192.168.100.10 controller192.168.100.20 compute[root@controller&&compute ~]#vi /etc/selinux/config更改SELINUX=disabled[root@controller&&compute ~]#setenforce 0[root@controller&&compute ~]#systemctl stop firewalld && systemctl disable firewalld3.配置 yum 源[root@controller ~]rm -rf /etc/yum.repos.d/*[root@controller&&compute ~]# vi /etc/yum.repos.d/http.repo[centos]name=centosbaseurl=http://192.168.133.130/centosgpgcheck=0enabled=1 [openstack]name=openstackbaseurl=http://192.168.133.130/openstack/iaas-repogpgcheck=0enabled=1[root@controller&&compute ~]#yum clean all && yum repolist && yum makecache 2.1.1 部署容器云平台使用 OpenStack 私有云平台创建两台云主机,分别作为 Kubernetes 集群的 master 节点和 node 节点,然后完成 Kubernetes 集群的部署,并完成 Istio 服务网 格、KubeVirt 虚拟化和 Harbor 镜像仓库的部署。创建俩台云主机并配网# Kubernetes 集群的部署[root@localhost ~]# mount -o loop chinaskills_cloud_paas_v2.0.2.iso /mnt/[root@localhost ~]# cp -rfv /mnt/* /opt/[root@localhost ~]# umount /mnt/[root@master ~]# hostnamectl set-hostname master && su[root@worker ~]# hostnamectl set-hostname worker && su# 安装kubeeasy[root@master ~]# mv /opt/kubeeasy /usr/bin/kubeeasy# 安装依赖环境[root@master ~]# kubeeasy install depend \--host 192.168.59.200,192.168.59.201 \--user root \--password 000000 \--offline-file /opt/dependencies/base-rpms.tar.gz# 安装k8s[root@master ~]# kubeeasy install k8s \--master 192.168.59.200 \--worker 192.168.59.201 \--user root \--password 000000 \--offline-file /opt/kubernetes.tar.gz# 安装istio网格[root@master ~]# kubeeasy add --istio istio# 安装kubevirt虚拟化[root@master ~]# kubeeasy add --virt kubevirt# 安装harbor仓库[root@master ~]# kubeeasy add --registry harbor[root@k8s-master-node1 ~]# vim pod.yamlapiVersion: v1kind: Podmetadata:  name: examspec:  containers:  - name: exam    image: nginx:latest    imagePullPolicy: IfNotPresent    env:    - name: exam      value: "2022"[root@k8s-master-node1 ~]# kubectl apply -f pod.yaml[root@k8s-master-node1 ~]# kubectl get pod#部署 Istio 服务网格[root@k8s-master-node1 ~]# kubectl create ns examnamespace/exam created[root@k8s-master-node1 ~]# kubectl edit ns exam更改为:  labels:    istio-injection: enabled[root@k8s-master-node1 ~]# kubectl describe ns exam  #查看任务 2 容器云服务运维(15 分)2.2.1 容器化部署 Node-Exporter编写 Dockerfile 文件构建 exporter 镜像,要求基于 centos 完成 Node-Exporter 服务的安装与配置,并设置服务开机自启。上传Hyperf.tar包[root@k8s-master-node1 ~]#tar -zxvf Hyperf.tar.gz[root@k8s-master-node1 ~]#cd hyperf/[root@k8s-master-node1 hyperf]#docker load -i centos_7.9.2009.tar上传node_exporter-1.7.0.linux-amd64.tar包[root@k8s-master-node1 hyperf]#vim Dockerfile-exporterFROM centos:centos7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*ADD node_exporter-1.7.0.linux-amd64.tar.gz /root/EXPOSE 9100ENTRYPOINT ["./root/node_exporter-1.7.0.linux-amd64/node_exporter"][root@k8s-master-node1 hyperf]#docker build -t monitor-exporter:v1.0 -f Dockerfile-exporter .2.2.2 容器化部署Alertmanager编写 Dockerfile 文件构建 alert 镜像,要求基于 centos:latest 完成 Alertmanager 服务的安装与配置,并设置服务开机自启。上传alertmanager-0.26.0.linux-amd64.tar包[root@k8s-master-node1 hyperf]#vim Dockerfile-alertFROM centos:centos7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*ADD alertmanager-0.26.0.linux-amd64.tar.gz /root/EXPOSE 9093 9094ENTRYPOINT ["./root/alertmanager-0.26.0.linux-amd64/alertmanager","--config.file","/root/alertmanager-0.26.0.linux-amd64/alertmanager.yml"][root@k8s-master-node1 hyperf]#docker 
build -t monitor-alert:v1.0 -f Dockerfile-alert .2.2.3 容器化部署 Grafana编写 Dockerfile 文件构建 grafana 镜像,要求基于 centos 完成 Grafana 服务 的安装与配置,并设置服务开机自启。上传grafana-6.4.1.linux-amd64.tar.gz包[root@k8s-master-node1 hyperf]#vim Dockerfile-grafanaFROM centos:centos7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*ADD grafana-6.4.1.linux-amd64.tar.gz /root/EXPOSE 3000ENTRYPOINT ["./root/grafana-6.4.1/bin/grafana-server","-homepath","/root/grafana-6.4.1/"][root@k8s-master-node1 hyperf]#docker build -t monitor-grafana:v1.0 -f Dockerfile-grafana .[root@k8s-master-node1 hyperf]#docker run -d --name grafana-exam-jiance monitor-grafana:v1.0 && sleep 5 && docker exec grafana-exam-jiance ps -aux && docker rm -f grafana-exam-jiance2.2.4 容器化部署 Prometheus 编写 Dockerfile 文件构建 prometheus 镜像,要求基于 centos 完成 Promethues 服务的安装与配置,并设置服务开机自启。上传prometheus-2.13.0.linux-amd64.tar.gz并解压[root@k8s-master-node1 hyperf]#tar -zxvf prometheus-2.13.0.linux-amd64.tar.gz[root@k8s-master-node1 hyperf]#mv prometheus-2.13.0.linux-amd64/prometheus.yml /root/hyperf && rm -rf prometheus-2.13.0.linux-amd64[root@k8s-master-node1 hyperf]#vim Dockerfile-prometheusFROM centos:centos7.9.2009MAINTAINER ChinaskillsRUN rm -rf /etc/yum.repos.d/*ADD prometheus-2.13.0.linux-amd64.tar.gz /root/RUN mkdir -p /data/prometheus/COPY prometheus.yml /data/prometheus/EXPOSE 9090ENTRYPOINT ["./root/prometheus-2.13.0.linux-amd64/prometheus","--config.file","/data/prometheus/prometheus.yml"][root@k8s-master-node1 hyperf]#docker build -t monitor-prometheus:v1.0 -f Dockerfile-prometheus .[root@k8s-master-node1 hyperf]#vim prometheus.yml #改动- job_name: 'prometheus'    static_configs:    - targets: ['localhost:9090']  - job_name: 'node'    static_configs:    - targets: ['node:9100']  - job_name: 'alertmanager'    static_configs:    - targets: ['alertmanager:9093']  - job_name: 'node-exporter'    static_configs:    - targets: ['node:9100']2.2.5 编排部署 Prometheus编写 docker-compose.yaml 文件,使用镜像 exporter、alert、grafana 和 prometheus 完成监控系统的编排部署。[root@k8s-master-node1 hyperf]#vim docker-compose.yaml编排部署prometheusversion: '3'services:  node:    container_name: monitor-node    image: monitor-exporter:v1.0    restart: always    hostname: node    ports:      - 9100:9100  alertmanager:    container_name: monitor-alertmanager    image: monitor-alert:v1.0    depends_on:      - node    restart: always    hostname: alertmanager    links:      - node    ports:      - 9093:9093      - 9094:9094  grafana:    container_name: monitor-grafana    image: monitor-grafana:v1.0    restart: always    depends_on:      - node      - alertmanager    hostname: grafana    links:      - node      - alertmanager    ports:      - 3000:3000  prometheus:    container_name: monitor-prometheus    image: monitor-prometheus:v1.0    restart: always    depends_on:      - node      - alertmanager      - grafana    hostname: prometheus    links:      - node      - alertmanager      - grafana    ports:      - 9090:9090[root@k8s-master-node1 ~]#docker-compose up -d 2.2.6 安装 Jenkins将 Jenkins 部署到 default 命名空间下。要求完成离线插件的安装,设置 Jenkins 的登录信息和授权策略。上传BlueOcean.tar.gz包[root@k8s-master-node1 ~]#tar -zxvf BlueOcean.tar.gz[root@k8s-master-node1 ~]#cd BlueOcean/images/[root@k8s-master-node1 images]# docker load -i java_8-jre.tar[root@k8s-master-node1 images]# docker load -i jenkins_jenkins_latest.tar[root@k8s-master-node1 images]# docker load -i gitlab_gitlab-ce_latest.tar[root@k8s-master-node1 images]# docker load -i maven_latest.tar[root@k8s-master-node1 images]# docker tag maven:latest  
192.168.59.200/library/maven[root@k8s-master-node1 images]# docker login 192.168.59.200Username: adminPassword: (Harbor12345)WARNING! Your password will be stored unencrypted in /root/.docker/config.json.Configure a credential helper to remove this warning. Seehttps://docs.docker.com/engine/reference/commandline/login/#credentials-store[root@k8s-master-node1 images]# docker push  192.168.59.200/library/maven#安装Jenkins[root@k8s-master-node1 BlueOcean]# kubectl create ns devops[root@k8s-master-node1 BlueOcean]# kubectl create deployment jenkins -n devops --image=jenkins/jenkins:latest --port 8080 --dry-run -o yaml > jenkins.yaml[root@k8s-master-node1 BlueOcean]# vim jenkins.yaml # 进入添加apiVersion: apps/v1kind: Deploymentmetadata:  creationTimestamp: null  labels:    app: jenkins  name: jenkins  namespace: devopsspec:  replicas: 1  selector:    matchLabels:      app: jenkins  strategy: {}  template:    metadata:      creationTimestamp: null      labels:        app: jenkins    spec:      nodeName: k8s-master-node1      containers:      - image: jenkins/jenkins:latest        imagePullPolicy: IfNotPresent        name: jenkins        ports:        - containerPort: 8080          name: jenkins8080        securityContext:          runAsUser: 0          privileged: true        volumeMounts:        - name: jenkins-home          mountPath: /home/jenkins_home/        - name: docker-home          mountPath: /run/docker.sock        - name: docker          mountPath: /usr/bin/docker        - name: kubectl          mountPath: /usr/bin/kubectl        - name: kube          mountPath: /root/.kube      volumes:      - name: jenkins-home        hostPath:          path: /home/jenkins_home/      - name: docker-home        hostPath:          path: /run/docker.sock      - name: docker        hostPath:          path: /usr/bin/docker      - name: kubectl        hostPath:          path: /usr/bin/kubectl      - name: kube        hostPath:          path: /root/.kube[root@k8s-master-node1 BlueOcean]# kubectl apply -f jenkins.yamldeployment.apps/jenkins created[root@k8s-master-node1 ~]# kubectl get pod -n devops NAME                      READY   STATUS    RESTARTS   AGEjenkins-7d4f5696b7-hqw9d   1/1     Running   0          88s# 进入jenkins,确定docker和kubectl成功安装[root@k8s-master-node1 ~]# kubectl exec -it -n devops jenkins-7d4f5696b7-hqw9d bash[root@k8s-master-node1 BlueOcean]# kubectl expose deployment jenkins -n devops --port=8080 --target-port=30880 --dry-run -o yaml >> jenkins.yaml[root@k8s-master-node1 BlueOcean]# vim jenkins.yaml # 进入修改第二次粘贴在第一此的后面apiVersion: v1kind: Servicemetadata:  creationTimestamp: null  labels:    app: jenkins  name: jenkins  namespace: devopsspec:  ports:  - port: 8080    protocol: TCP    name: jenkins8080    nodePort: 30880  - name: jenkins    port: 50000    nodePort: 30850  selector:    app: jenkins  type: NodePort[root@k8s-master-node1 BlueOcean]# kubectl apply -f jenkins.yamlservice/jenkins created[root@k8s-master-node1 ~]# kubectl get -n devops svcNAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGEjenkins   NodePort   10.96.53.170   <none>        8080:30880/TCP   10s# 使用提供的软件包完成Blue Ocean等离线插件的安装[root@k8s-master-node1 BlueOcean]# kubectl -n devops cp plugins/ jenkins-7d4f5696b7-hqw9d:/var/jenkins_home/* *访问 ip:30880 进入jenkins*# 查看密码[root@k8s-master-node1 BlueOcean]# kubectl -n devops exec jenkins-7d4f5696b7-hqw9d --cat /var/jenkins_home/secrets/initialAdminPassword    2.2.7 安装 GitLab 将 GitLab 部署到 default 命名空间下,要求设置 root 用户密码,新建公开项 目,并将提供的代码上传到该项目。[root@k8s-master-node1 
BlueOcean]# kubectl create deployment gitlab -n devops --image=gitlab/gitlab-ce:latest --port 80 --dry-run -o yaml > gitlab.yamlW0222 12:00:34.346609   25564 helpers.go:555] --dry-run is deprecated and can be replaced with --dry-run=client.[root@k8s-master-node1 BlueOcean]# vim gitlab.yamljitlab的配置文件apiVersion: apps/v1kind: Deploymentmetadata:  creationTimestamp: null  labels:    app: gitlab  name: gitlab  namespace: devopsspec:  replicas: 1  selector:    matchLabels:      app: gitlab  strategy: {}  template:    metadata:      creationTimestamp: null      labels:        app: gitlab    spec:      containers:      - image: gitlab/gitlab-ce:latest        imagePullPolicy: IfNotPresent        name: gitlab-ce        ports:        - containerPort: 80        env:        - name: GITLAB_ROOT_PASSWORD          value: admin@123[root@k8s-master-node1 BlueOcean]# kubectl apply -f gitlab.yamldeployment.apps/gitlab created[root@k8s-master-node1 BlueOcean]# kubectl  get pod -n devopsNAME                      READY   STATUS    RESTARTS      AGEgitlab-5b47c8d994-8s9qb   1/1     Running   0             17sjenkins-bbf477c4f-55vgj   1/1     Running   2 (15m ago)   34m[root@k8s-master-node1 BlueOcean]# kubectl expose deployment gitlab -n devops --port=80 --target-port=30888 --dry-run=client -o yaml >> gitlab.yaml[root@k8s-master-node1 BlueOcean]# vim gitlab.yaml # 进入添加---apiVersion: v1kind: Servicemetadata:  creationTimestamp: null  labels:    app: gitlab  name: gitlab  namespace: devopsspec:  ports:  - port: 80    nodePort: 30888  selector:    app: gitlab  type: NodePort[root@k8s-master-node1 BlueOcean]# kubectl apply -f gitlab.yamldeployment.apps/gitlab configuredservice/gitlab created[root@k8s-master-node1 BlueOcean]# kubectl get svc -n devopsNAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGEgitlab    NodePort   10.96.149.160   <none>        80:30888/TCP     6sjenkins   NodePort   10.96.174.123   <none>        8080:30880/TCP   8m7s# 等待gitlab启动,访问IP:30888  root , admin@123 登录 Gitlab* # 将springcloud文件夹中的代码上传到该项目,Gitlab提供了代码示例[root@k8s-master-node1 BlueOcean]# cd springcloud/[root@k8s-master-node1 springcloud]# git config --global user.name "Administrator"[root@k8s-master-node1 springcloud]# git config --global user.email "admin@example.com"[root@k8s-master-node1 springcloud]# git remote remove origin[root@k8s-master-node1 springcloud]# git remote add origin  http://192.168.100.23:30888/root/springcloud.git[root@k8s-master-node1 springcloud]# git add .[root@k8s-master-node1 springcloud]# git commit -m "Initial commit"# On branch masternothing to commit, working directory clean[root@k8s-master-node1 springcloud]# git push -u origin masterUsername for 'http://192.168.100.23:30888': root Password for 'http://root@192.168.100.23:30888':(admin@123)Counting objects: 3192, done.Delta compression using up to 4 threads.Compressing objects: 100% (1428/1428), done.Writing objects: 100% (3192/3192), 1.40 MiB | 0 bytes/s, done.Total 3192 (delta 1233), reused 3010 (delta 1207)remote: Resolving deltas: 100% (1233/1233), done.To http://192.168.100.23:30888/root/springcloud.git * [new branch]      master -> masterBranch master set up to track remote branch master from origin. 
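以下是一段可选的验证命令(补充示例,假设 GitLab 的 NodePort 为 30888、节点 IP 为 192.168.100.23,与上文一致),用于确认 GitLab 正常运行且代码已成功推送:
# 确认 gitlab Pod 就绪(Deployment 中定义的标签为 app: gitlab)
kubectl -n devops get pod -l app=gitlab
# 公开项目主页应返回 200 或 302
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.100.23:30888/root/springcloud
# 能列出远端 master 分支即说明代码推送成功
git ls-remote http://192.168.100.23:30888/root/springcloud.git master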
2.2.8 配置 Jenkins 与 GitLab 集成在 Jenkins 中新建流水线任务,配置 GitLab 连接 Jenkins,并完成 WebHook 的配置。 * 在 GitLab 中生成名为 jenkins 的“Access Tokens” * 返回 jenkins   * 回到 Gitlab ,复制 token * 复制后填写到此    2.2.9 构建 CI/CD 环境在流水线任务中编写流水线脚本,完成后触发构建,要求基于 GitLab 中的 项目自动完成代码编译、镜像构建与推送、并自动发布服务到 Kubernetes 集群 中。# 创建命名空间[root@k8s-master-node1 ~]# kubectl create ns springcloud* *新建流水线*    * *添加 Gitlab 用户密码*  * Harbor 仓库创建公开项目 springcloud * *返回 Gitlab 准备编写流水线* # 添加映射[root@k8s-master-node1 ~]# cat /etc/hosts192.168.59.200 apiserver.cluster.local # 选择这一行# 进入jenkins 添加映射[root@k8s-master-node1 ~]# kubectl exec -it -n devops jenkins-bbf477c4f-55vgj bashkubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.root@jenkins-bbf477c4f-55vgj:/# echo "192.168.200.59 apiserver.cluster.local" >> /etc/hostsroot@jenkins-bbf477c4f-55vgj:/# cat /etc/hosts # 查看是否成功 # 编写流水线pipeline{    agent none    stages{        stage('mvn-build'){            agent{                docker{                    image '192.168.3.10/library/maven'                    args '-v /root/.m2:/root/.m2'                }            }            steps{                sh 'cp -rvf /opt/repository /root/.m2'                sh 'mvn package -DskipTests'            }        }        stage('image-build'){            agent any            steps{                sh 'cd gateway && docker build -t 192.168.3.10/springcloud/gateway -f Dockerfile .'                sh 'cd config && docker build -t 192.168.3.10/springcloud/config -f Dockerfile .'                sh 'docker login 192.168.3.10 -u=admin -p=Harbor12345'                sh 'docker push 192.168.3.10/springcloud/gateway'                sh 'docker push 192.168.3.10/springcloud/config'            }        }        stage('cloud-deployment'){            agent any            steps{                sh 'sed -i "s/sqshq\\/piggymetrics-gateway/192.168.3.10\\/springcloud\\/gateway/g" yaml/deployment/gateway-deployment.yaml'                sh 'sed -i "s/sqshq\\/piggymetrics-config/192.168.3.10\\/springcloud\\/config/g" yaml/deployment/config-deployment.yaml'                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/deployment/gateway-deployment.yaml'                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/deployment/config-deployment.yaml'                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/svc/gateway-svc.yaml'                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/svc/config-svc.yaml'            }        }    }}stages:代表整个流水线的所有执行阶段,通常stages只有1个,里面包含多个stage。stage:代表流水线中的某个阶段,可能出现n个。一般分为拉取代码,编译构建,部署等阶段。steps:代表一个阶段内需要执行的逻辑。steps里面是shell脚本,git拉取代码,ssh远程发布等任意内容。* *保存流水线文件,配置Webhook触发构建*  * *取消勾选 SSL 选择, Add webhook 创建*![](vx_images/545790416256726.png =900x) * 创建成功进行测试,成功后返回 jenkins 会发现流水线已经开始自动构建 * 流水线执行成功  # 流水线构建的项目全部运行[root@k8s-master-node1 ~]# kubectl get pod -n springcloudNAME                       READY   STATUS    RESTARTS      AGEconfig-77c74dd878-8kl4x    1/1     Running   0             28sgateway-5b46966894-twv5k   1/1     Running   1 (19s ago)   28s[root@k8s-master-node1 ~]# kubectl -n springcloud get serviceNAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGEconfig    NodePort   10.96.137.40   <none>        8888:30015/TCP   4m3sgateway   NodePort   10.96.121.82   <none>        4000:30010/TCP   4m4s* *等待 PIg 微服务启动,访问 ip:30010 查看构建成功*2.2.10 服务网格:创建 Ingress Gateway将 Bookinfo 应用部署到 default 命名空间下,请为 Bookinfo 应用创建一个网 关,使外部可以访问 Bookinfo 
应用。上传ServiceMesh.tar.gz包[root@k8s-master-node1 ~]# tar -zxvf ServiceMesh.tar.gz[root@k8s-master-node1 ~]# cd ServiceMesh/images/[root@k8s-master-node1 images]# docker load -i image.tar部署Bookinfo应用到kubernetes集群:[root@k8s-master-node1 images]# cd /root/ServiceMesh/[root@k8s-master-node1 ServiceMesh]# kubectl apply -f bookinfo/bookinfo.yamlservice/details createdserviceaccount/bookinfo-details createddeployment.apps/details-v1 createdservice/ratings createdserviceaccount/bookinfo-ratings createddeployment.apps/ratings-v1 createdservice/reviews createdserviceaccount/bookinfo-reviews createddeployment.apps/reviews-v1 createdservice/productpage createdserviceaccount/bookinfo-productpage createddeployment.apps/productpage-v1 created[root@k8s-master-node1 ServiceMesh]# kubectl get podNAME                              READY   STATUS    RESTARTS   AGEdetails-v1-79f774bdb9-kndm9       1/1     Running   0          7sproductpage-v1-6b746f74dc-bswbx   1/1     Running   0          7sratings-v1-b6994bb9-6hqfn         1/1     Running   0          7sreviews-v1-545db77b95-j72x5       1/1     Running   0          7s[root@k8s-master-node1 ServiceMesh]# vim bookinfo-gateway.yamlapiVersion: networking.istio.io/v1alpha3kind: Gatewaymetadata:  name: bookinfo-gatewayspec:  selector:    istio: ingressgateway  servers:  - port:      number: 80      name: http      protocol: HTTP    hosts:    - "*"---apiVersion: networking.istio.io/v1alpha3kind: VirtualServicemetadata:  name: bookinfospec:  hosts:  - "*"    gateways:  - bookinfo-gateway  http:  - match:    - uri:        exact: /productpage    - uri:        prefix: /static    - uri:        exact: /login    - uri:        exact: /logout    - uri:        prefix: /api/v1/products    route: # 定义路由转发目的地列表    - destination:        host: productpage        port:          number: 9080[root@k8s-master-node1 ServiceMesh]# kubectl apply -f bookinfo-gateway.yamlgateway.networking.istio.io/bookinfo-gateway createdvirtualservice.networking.istio.io/bookinfo created[root@k8s-master-node1 ServiceMesh]#kubectl get VirtualService bookinfo -o yamlbookinfo-gateway || exact: /productpage || destination || host: productpage || number: 9080[root@k8s-master-node1 ServiceMesh]#kubectl get gateway bookinfo-gateway -o yamlistio: ingressgateway2.2.11 KubeVirt 运维:创建 VM使用提供的镜像在 kubevirt 命名空间下创建一台 VM,名称为 exam,指定 VM 的内存、CPU、网卡和磁盘等配置。[root@k8s-master-node1 ~]# kubectl explain kubevirt.spec. --recursive |grep use         useEmulation   <boolean>[root@k8s-master-node1 ~]# kubectl -n kubevirt edit kubevirtspec:  certificateRotateStrategy: {}  configuration:    developerConfiguration: #{}      useEmulation: true[root@k8s-master-node1 ~]# vim vm.yamlapiVersion: kubevirt.io/v1kind: VirtualMachinemetadata:  name: examspec:  running: true  template:    spec:      domain:        devices:          disks:            - name: vm              disk: {}        resources:          requests:            memory: 1Gi      volumes:        - name: vm          containerDisk:            image: fedora-virt:v1.0            imagePullPolicy: IfNotPresent[root@k8s-master-node1 ~]# kubectl apply -f vm.yamlvirtualmachine.kubevirt.io/exam created[root@k8s-master-node1 ~]# kubectl get virtualmachineNAME        AGE   STATUS    READY exam   31s   Running   True[root@k8s-master-node1 ~]# kubectl delete -f vm.yamlvirtualmachine.kubevirt.io "exam" deleted 
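补充一个可选的检查示例(非题目要求):在执行 kubectl delete 清理之前,可以用下面的命令进一步确认 VM 已正常运行;其中 virtctl 是否可用取决于 KubeVirt 的安装方式,此处仅作假设:
# VMI(VirtualMachineInstance)应与 VM 同名且处于 Running
kubectl get vmi exam
# 查看 VM 的详细状态与事件
kubectl describe vm exam
# 若环境中提供了 virtctl,可进入串口控制台确认系统启动情况(Ctrl+] 退出)
# virtctl console exam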
  • [技术干货] 一键部署k8s3
    容器云搭建2.1.1 部署Kubernetes容器云平台使用OpenStack私有云平台创建两台云主机,云主机类型使用4vCPU/12G/100G类型,分别作为Kubernetes集群的Master节点和node节点,然后完成Kubernetes集群部署。2.1.2 部署Harbor镜像仓库在Kubernetes集群中完成Harbor镜像仓库部署。2.1.3 部署Istio服务网格在Kubernetes集群中完成Istio服务网格组件部署。2.1.4 部署kubeVirt 虚拟化组件在Kubernetes集群中完成kubeVirt虚拟化组件部署。mount -o loop chinaskills_cloud_paas_v2.1.iso /mnt/cp -rfv /mnt/* /opt/umount /mnt/ ## 在master节点安装kubeeasy工具:mv /opt/kubeeasy-v2.0 /usr/bin/kubeeasy ## 在master节点安装依赖包:kubeeasy install dependencies \ --host 10.28.0.205,10.28.0.221 \ --user root \ --password Abc@1234 \ --offline-file /opt/dependencies/packages.tar.gz  ## 配置SSH免密:kubeeasy check ssh \ --host 10.28.0.205,10.28.0.221 \ --user root \ --password Abc@1234 kubeeasy create ssh-keygen \  --master 10.28.2.191 \  --worker 10.28.0.198 \  --user root \  --password Abc@1234 ## master节点部署kuberneteskubeeasy install kubernetes \  --master 10.24.2.10 \  --worker 10.24.2.20,10.24.2.30,10.24.2.40 \  --user root \  --password 000000 \  --version 1.25.2 \  --offline-file /opt/kubeeasy.tar.gz容器云服务运维:2.2.1 容器化部署Node-Exporter编写Dockerfile文件构建exporter镜像,要求基于centos完成Node-Exporter服务的安装与配置,并设置服务开机自启。编写Dockerfile构建monitor-exporter:v1.0镜像,具体要求如下:(需要用到的软件包:Monitor.tar.gz)(1)基础镜像:centos:centos7.9.2009;(2)使用二进制包node_exporter-0.18.1.linux-amd64.tar.gz安装node-exporter服务;(3)声明端口:9100;(4)设置服务开机自启。tar -zxvf Monitor.tar.gz docker load -i Monitor/CentOS_7.9.2009.tar cd Monitor/##编写Dockerfile文件vim Dockerfile-exporterFROM centos:centos7.9.2009RUN rm -rf /etc/yum.repos.d/*ADD node_exporter-0.18.1.linux-amd64.tar.gz /root/EXPOSE 9100ENTRYPOINT ["./root/node_exporter-0.18.1.linux-amd64/node_exporter"] ##运行脚本docker build -t monitor-exporter:v1.0 -f Dockerfile-exporter .2.2.2容器化部署Alertmanager编写Dockerfile文件构建alert镜像,要求基于centos:latest完成Alertmanager服务的安装与配置,并设置服务开机自启。编写Dockerfile构建monitor-alert:v1.0镜像,具体要求如下:(需要用到的软件包:Monitor.tar.gz)(1)基础镜像:centos:centos7.9.2009;(2)使用二进制包alertmanager-0.19.0.linux-amd64.tar.gz安装Alertmanager服务;(3)声明端口:9093、9094;(4)设置服务开机自启。tar -zxvf Monitor.tar.gz docker load -i Monitor/CentOS_7.9.2009.tar cd Monitor/ ##编写Dockerfile文件vim Dockerfile-alertFROM centos:centos7.9.2009RUN rm -rf /etc/yum.repos.d/*ADD alertmanager-0.19.0.linux-amd64.tar.gz /root/EXPOSE 9093 9094ENTRYPOINT ["./root/alertmanager-0.19.0.linux-amd64/alertmanager","--config.file","/root/alertmanager-0.19.0.linux-amd64/alertmanager.yml"] ##运行脚本docker build -t monitor-alert:v1.0 -f Dockerfile-alert .2.2.3 容器化部署Grafana编写Dockerfile文件构建grafana镜像,要求基于centos完成Grafana服务的安装与配置,并设置服务开机自启。编写Dockerfile构建monitor-grafana:v1.0镜像,具体要求如下:(需要用到的软件包:Monitor.tar.gz)(1)基础镜像:centos:centos7.9.2009;(2)使用二进制包grafana-6.4.1.linux-amd64.tar.gz安装grafana服务;(3)声明端口:3000;(4)设置nacos服务开机自启。tar -zxvf Monitor.tar.gz docker load -i Monitor/CentOS_7.9.2009.tar cd Monitor/ ##编写Dockerfile文件vim Dockerfile-grafanaFROM centos:centos7.9.2009RUN rm -rf /etc/yum.repos.d/*ADD grafana-6.4.1.linux-amd64.tar.gz /root/EXPOSE 3000ENTRYPOINT ["./root/grafana-6.4.1/bin/grafana-server","-homepath","/root/grafana-6.4.1/"] ##运行脚本docker build -t monitor-grafana:v1.0 -f Dockerfile-grafana .2.2.4 容器化部署Prometheus编写Dockerfile文件构建prometheus镜像,要求基于centos完成Promethues服务的安装与配置,并设置服务开机自启。编写Dockerfile构建monitor-prometheus:v1.0镜像,具体要求如下:(需要用到的软件包:Monitor.tar.gz)(1)基础镜像:centos:centos7.9.2009;(2)使用二进制包prometheus-2.13.0.linux-amd64.tar.gz安装promethues服务;(3)编辑/data/prometheus/prometheus.yml文件,创建3个任务模板:prometheus、node和alertmanager,并将该文件拷贝到/data/prometheus/目录下;(4)声明端口:9090;(5)设置服务开机自启。编写Dockerfile文件FROM centos:centos7.9.2009RUN rm -rf /etc/yum.repos.d/*ADD prometheus-2.13.0.linux-amd64.tar.gz 
/root/RUN mkdir -p /data/prometheusEXPOSE 9090RUN cat <<EOF > /data/prometheus/prometheus.ymlglobal:  scrape_interval: 15s scrape_configs:- job_name: prometheus  static_configs:  - targets: ['localhost:9090']- job_name: node  static_configs:  - targets: ['localhost:9090']- job_name: alertmanager  static_configs:  - targets: ['localhost:9090']- job_name: grafana:  static_configs:  - targets: ['localhost:9090']    EOF ENTRYPOINT ["./root/prometheus-2.13.0.linux-amd64/prometheus","--config.file","/data/prometheus/prometheus.yml"]上面cat写入了 下面的prometheus.yml就不用再写了编写prometheus.yml (如果写了下面的文件 需要在Dockerfile中COPY文件到/data/prometheus/)[root@master Monitor]# vim prometheus.ymlglobal:  scrape_interval:     15s   evaluation_interval: 15s  alerting:  alertmanagers:  - static_configs:    - targets:      - alertmanager: 9093 rule_files: scrape_configs:  - job_name: 'prometheus'    static_configs:    - targets: ['localhost:9090']   - job_name: 'node'    static_configs:    - targets: ['node:9100']   - job_name: 'alertmanager'    static_configs:    - targets: ['alertmanager:9093']  - job_name: 'node-exporter'    static_configs:    - targets: ['node:9100']跑脚本docker build -t monitor-prometheus:v1.0 -f Dockerfile-prometheus .2.2.5 编排部署监控系统编写docker-compose.yaml文件,使用镜像exporter、alert、grafana和prometheus完成监控系统的编排部署。编写docker-compose.yaml文件,具体要求如下:(1)容器1名称:monitor-node;镜像:monitor-exporter:v1.0;端口映射:9100:9100;(2)容器2名称:monitor- alertmanager;镜像:monitor-alert:v1.0;端口映射:9093:9093、9094:9094;(3)容器3名称:monitor-grafana;镜像:monitor-grafana:v1.0;端口映射:3000:3000;(4)容器4名称:monitor-prometheus;镜像:monitor-prometheus:v1.0;端口映射:9090:9090。完成后编排部署监控系统,将Prometheus设置为Grafana的数据源,并命名为Prometheus。(5)添加元数据 进入grafana的网页 添加prometheus为数据源编写docker-compose.yaml文件version: '3'services:# 容器1:用于监控节点的exporter服务  monitor-node:    image: monitor-exporter:v1.0    ports:      - "9100:9100"         # 容器2:alertmanager服务  monitor-alertmanager:    image: monitor-alert:v1.0    ports:      - "9093:9093"      - "9094:9094"   # 容器3:grafana服务  monitor-grafana:    image: monitor-grafana:v1.0    ports:      - "3000:3000"  # 容器4:prometheus服务  monitor-prometheus:    image: monitor-prometheus:v1.0    ports:      - "9090:9090"有依赖关系的写法;version: '3'services:  node:    container_name: monitor-node    image: monitor-exporter:v1.0    restart: always    hostname: node    ports:      - 9100:9100  alertmanager:    container_name: monitor-alertmanager    image: monitor-alert:v1.0    depends_on:      - node    restart: always    hostname: alertmanager    links:      - node    ports:      - 9093:9093      - 9094:9094  grafana:    container_name: monitor-grafana    image: monitor-grafana:v1.0    depends_on:      - node      - alertmanager    hostname: grafana    restart: always    links:      - node      - alertmanager    ports:      - 3000:3000  prometheus:    container_name: monitor-prometheus    image: monitor-prometheus:v1.0    depends_on:      - node      - alertmanager      - grafana    hostname: prometheus    restart: always    links:      - node      - alertmanager      - grafana    ports:      - 9090:9090查看pod状态[root@master Monitor]# docker ps -aCONTAINER ID   IMAGE                     COMMAND                  CREATED         STATUS         PORTS                                                           NAMESe4a643469259   monitor-prometheus:v1.0   "./root/prometheus-2…"   2 minutes ago   Up 2 minutes   0.0.0.0:9090->9090/tcp, :::9090->9090/tcp                       monitor-prometheuscd1eddaba0d3   monitor-grafana:v1.0      "./root/grafana-6.4.…"   2 minutes ago   Up 2 minutes   
0.0.0.0:3000->3000/tcp, :::3000->3000/tcp                       monitor-grafana9032755f8e18   monitor-alert:v1.0        "./root/alertmanager…"   2 minutes ago   Up 2 minutes   0.0.0.0:9093-9094->9093-9094/tcp, :::9093-9094->9093-9094/tcp   monitor-alertmanagere3ae4d3bf8f9   monitor-exporter:v1.0     "./root/node_exporte…"   2 minutes ago   Up 2 minutes   0.0.0.0:9100->9100/tcp, :::9100->9100/tcp                       monitor-node登录grafana网页http://10.28.0.244:3000 账号admin 密码随便(admin)登录后会提示修改密码 可以跳过 添加prometheus为数据源  输入主节点的ip加端口号http://10.28.0.244:9090(普罗米修斯的端口)然后点击下面绿色的保存 再点back退出 2.2.6 部署GitLab将GitLab部署到Kubernetes集群中,设置GitLab服务root用户的密码,使用Service暴露服务,并将提供的项目包导入到GitLab中。在Kubernetes集群中新建命名空间gitlab-ci,将GitLab部署到该命名空间下,Deployment和Service名称均为gitlab,以NodePort方式将80端口对外暴露为30880,设置GitLab服务root用户的密码为admin@123,将项目包demo-2048.tar.gz导入到GitLab中并命名为demo-2048。需要用到的软件包:CICD-Runners-demo2048.tar.gz解压软件包,导入镜像[root@master ~]# tar -zxvf CICD-Runners-demo2048.tar.gz[root@master ~]# ctr -n k8s.io image import gitlab-ci/images/images.tar[root@master ~]# docker load < gitlab-ci/images/images.tar部署GitLab服务[root@master ~]# kubectl create ns gitlab-ci        ## 新建命名空间 [root@master ~]# cd gitlab-ci[root@master gitlab-ci]# vi gitlab-deploy.yamlapiVersion: apps/v1kind: Deploymentmetadata:  name: gitlab  namespace: gitlab-ci  labels:    name: gitlabspec:  selector:    matchLabels:      name: gitlab  template:    metadata:      name: gitlab      labels:        name: gitlab    spec:      containers:      - name: gitlab        image: gitlab/gitlab-ce:latest        imagePullPolicy: IfNotPresent        env:        - name: GITLAB_ROOT_PASSWORD          value: admin@123        - name: GITLAB_ROOT_EMAIL          value: 123456@qq.com        ports:        - name: http          containerPort: 80        volumeMounts:        - name: gitlab-config          mountPath: /etc/gitlab        - name: gitlab-logs          mountPath: /var/log/gitlab        - name: gitlab-data          mountPath: /var/opt/gitlab      volumes:      - name: gitlab-config        hostPath:          path: /home/gitlab/conf      - name: gitlab-logs        hostPath:          path: /home/gitlab/logs      - name: gitlab-data        hostPath:          path: /home/gitlab/data删除deployment资源的命令kubectl -n gitlab-ci delete -f gitlab-deploy.yaml[root@master gitlab-ci]# vi gitlab-svc.yaml        ## 创建service服务释放端口apiVersion: v1kind: Servicemetadata:  name: gitlab  namespace: gitlab-ci  labels:    name: gitlabspec:  type: NodePort  ports:    - name: http      port: 80      targetPort: http      nodePort: 30880  selector:    name: gitlab [root@master gitlab-ci]# kubectl apply -f gitlab-deploy.yaml [root@master gitlab-ci]# kubectl apply -f gitlab-svc.yaml  ## 查看pod[root@master gitlab-ci]# kubectl -n gitlab-ci get pod NAME                      READY   STATUS    RESTARTS   AGEgitlab-65c6b98f6b-q4dwq   1/1     Running   0          2m3s [root@master gitlab-ci]# kubectl -n gitlab-ci get pods -owide        ## 查看pod详细信息NAME                      READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATESgitlab-65c6b98f6b-q4dwq   1/1     Running   0          2m57s   192.244.0.21   master   <none>           <none>在集群中定义hosts添加gitlabPod的解析[root@master gitlab-ci]# kubectl edit configmap coredns -n kube-system... ...     
16            fallthrough in-addr.arpa ip6.arpa     17            ttl 30     18  }     19         hosts {     20             192.244.0.21 gitlab-65c6b98f6b-q4dwq            ## 这里是Pod容器的ip     21             fallthrough     22  }     23         prometheus :9153     24                                                             ## 这里有三行删除     25         cache 30... ... 保存退出  需要保存两遍[root@master gitlab-ci]# kubectl -n kube-system rollout restart deploy coredns    ## 保存刚才的设置进入gitlab Pod中[root@master gitlab-ci]#  kubectl -n gitlab-ci get pods[root@master gitlab-ci]#  kubectl exec -it -n gitlab-ci gitlab-65c6b98f6b-q4dwq bashroot@gitlab-7b54df755-6ljtp:/# vi /etc/gitlab/gitlab.rb external_url 'http://192.244.0.21:80'            ## 再首行添加  这里也是Pod的iproot@gitlab-7b54df755-6ljtp:/# rebootroot@gitlab-7b54df755-6ljtp:/# exit查看service[root@master gitlab-ci]# kubectl -n gitlab-ci get svcNAME     TYPE       CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGEgitlab   NodePort   192.102.225.126   <none>        80:30880/TCP   18m访问主机IPhttp://10.28.3.102:30880用户:123456@qq.com 密码:admin@123 点击 “Create a project” 点击“Create biank project” 创建项目demo-2048 可见等级选Public 填好后 点“Create project” 进入项目 将代码推送到项目中[root@master gitlab-ci]# cd /root/gitlab-ci/demo-2048[root@master demo-2048]# git config --global user.name "Administrator"     ## 这里的用户密码[root@master demo-2048]# git config --global user.email "123456@qq.com"     ## 是用于下载时候登录的[root@master demo-2048]# git remote remove origin        ## 删除原有库[root@master demo-2048]# git remote add origin http://10.28.0.95:30880/root/demo-2048.git ## 添加库主节点IP[root@master demo-2048]# git add .[root@master demo-2048]# git commit -m "initial commit"[root@master demo-2048]# git push -u origin droneUsername for 'http://10.28.0.198:30880': root        Password for 'http://root@10.28.0.198:30880': admin@123         ## 这是deployment资源文件中设置的推送完刷新 项目库 2.2.7 部署GitLab Runner将GitLab Runner部署到Kubernetes集群中,为GitLab Runner创建持久化构建缓存目录以加速构建速度,并将其注册到GitLab中。将GitLab Runner部署到gitlab-ci命名空间下,Release名称为gitlab-runner,为GitLab Runner创建持久化构建缓存目录/home/gitlab-runner/ci-build-cache以加速构建速度,并将其注册到GitLab中。登录GitLab管理界面(http://10.24.2.14:30880/admin),然后点击左侧菜单栏中的CI/CD下的Runners 记住复制的token:DN3ZZDAGSGB-kWSb-qBT创建Service服务[root@master ~]# cd /root/gitlab-ci/[root@master gitlab-ci]# cat runner-sa.yaml apiVersion: v1kind: ServiceAccountmetadata:  name: gitlab-ci  namespace: gitlab-ci创建角色[root@master gitlab-ci]# cat runner-role.yaml kind: RoleapiVersion: rbac.authorization.k8s.io/v1metadata:  name: gitlab-ci  namespace: gitlab-cirules:  - apiGroups: [""]    resources: ["*"]    verbs: ["*"]创建角色对接[root@master gitlab-ci]# cat runner-rb.yaml kind: RoleBindingapiVersion: rbac.authorization.k8s.io/v1metadata:  name: gitlab-ci  namespace: gitlab-cisubjects:  - kind: ServiceAccount    name: gitlab-ci    namespace: gitlab-ciroleRef:  kind: Role  name: gitlab-ci  apiGroup: rbac.authorization.k8s.io创建资源对象[root@master gitlab-ci]# kubectl apply -f runner-sa.yaml [root@master gitlab-ci]# kubectl apply -f runner-role.yaml [root@master gitlab-ci]# kubectl apply -f runner-rb.yaml apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata:  name: default  labels:    k8s-app: gitlab-defaultroleRef:  apiGroup: rbac.authorization.k8s.io  kind: ClusterRole  name: cluster-adminsubjects:  - kind: ServiceAccount    name: default    namespace: gitlab-ci————————————————                            版权声明:本文为博主原创文章,遵循 CC 4.0 BY-SA 版权协议,转载请附上原文出处链接和本声明。                        原文链接:https://blog.csdn.net/qq_36416567/article/details/144212465
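补充:上述 RBAC 资源创建完成后,可用下面的命令做一次快速校验(补充示例,ServiceAccount 名称 gitlab-ci 与前文 runner-sa.yaml 中一致):
kubectl -n gitlab-ci get sa,role,rolebinding
# 预期输出 yes,说明该 ServiceAccount 已具备在 gitlab-ci 命名空间内创建 Pod 的权限
kubectl auth can-i create pods -n gitlab-ci --as=system:serviceaccount:gitlab-ci:gitlab-ci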
  • [技术干货] 一键部署k8s2
    # 查看密码[root@k8s-master-node1 BlueOcean]# kubectl -n devops exec jenkins-7d4f5696b7-hqw9d --cat /var/jenkins_home/secrets/initialAdminPassword32c47352c469a4ef58e8a797226949e88 * *前面安装了离线插件,所以这里需要重启 jenkins ,地址栏加入 restart 完成重启*   2.2.7 安装 GitLab将 GitLab 部署到 default 命名空间下,要求设置 root 用户密码,新建公开项 目,并将提供的代码上传到该项目。[root@k8s-master-node1 BlueOcean]# kubectl create deployment gitlab -n devops --image=gitlab/gitlab-ce:latest --port 80 --dry-run -o yaml > gitlab.yamlW0222 12:00:34.346609   25564 helpers.go:555] --dry-run is deprecated and can be replaced with --dry-run=client.[root@k8s-master-node1 BlueOcean]# vim gitlab.yamlapiVersion: apps/v1kind: Deploymentmetadata:  creationTimestamp: null  labels:    app: gitlab  name: gitlab  namespace: devopsspec:  replicas: 1  selector:    matchLabels:      app: gitlab  #strategy: {}  template:    metadata:      #creationTimestamp: null      labels:        app: gitlab    spec:      containers:      - image: gitlab/gitlab-ce:latest         imagePullPolicy: IfNotPresent        name: gitlab-ce        ports:        - containerPort: 80        env:        - name: GITLAB_ROOT_PASSWORD          value: admin@123[root@k8s-master-node1 BlueOcean]# kubectl apply -f gitlab.yamldeployment.apps/gitlab created[root@k8s-master-node1 BlueOcean]# kubectl  get pod -n devopsNAME                      READY   STATUS    RESTARTS      AGEgitlab-5b47c8d994-8s9qb   1/1     Running   0             17sjenkins-bbf477c4f-55vgj   1/1     Running   2 (15m ago)   34m[root@k8s-master-node1 BlueOcean]# kubectl expose deployment gitlab -n devops --port=80 --target-port=30888 --dry-run=client -o yaml >> gitlab.yaml[root@k8s-master-node1 BlueOcean]# vim gitlab.yaml # 进入添加---apiVersion: v1kind: Servicemetadata:  creationTimestamp: null  labels:    app: gitlab  name: gitlab  namespace: devopsspec:  ports:  - port: 80    nodePort: 30888  selector:    app: gitlab  type: NodePort[root@k8s-master-node1 BlueOcean]# kubectl apply -f gitlab.yaml deployment.apps/gitlab configuredservice/gitlab created[root@k8s-master-node1 BlueOcean]# kubectl get svc -n devops NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGEgitlab    NodePort   10.96.149.160   <none>        80:30888/TCP     6sjenkins   NodePort   10.96.174.123   <none>        8080:30880/TCP   8m7s# 等待gitlab启动,访问IP:30888  root , admin@123 登录 Gitlab* # 将springcloud文件夹中的代码上传到该项目,Gitlab提供了代码示例[root@k8s-master-node1 BlueOcean]# cd springcloud/[root@k8s-master-node1 springcloud]# git config --global user.name "Administrator"[root@k8s-master-node1 springcloud]# git config --global user.email "admin@example.com"[root@k8s-master-node1 springcloud]# git remote remove origin[root@k8s-master-node1 springcloud]# git remote add origin  cid:link_0[root@k8s-master-node1 springcloud]# git add .[root@k8s-master-node1 springcloud]# git commit -m "Initial commit"# On branch masternothing to commit, working directory clean[root@k8s-master-node1 springcloud]# git push -u origin masterUsername for 'http://192.168.100.23:30888': root Password for 'http://root@192.168.100.23:30888':(admin@123)Counting objects: 3192, done.Delta compression using up to 4 threads.Compressing objects: 100% (1428/1428), done.Writing objects: 100% (3192/3192), 1.40 MiB | 0 bytes/s, done.Total 3192 (delta 1233), reused 3010 (delta 1207)remote: Resolving deltas: 100% (1233/1233), done.To cid:link_0 * [new branch]      master -> masterBranch master set up to track remote branch master from origin.2.2.8 配置 Jenkins 与 GitLab 集成在 Jenkins 中新建流水线任务,配置 GitLab 连接 Jenkins,并完成 WebHook 的配置。 * 
*在 GitLab 中生成名为 jenkins 的“Access Tokens”* * *返回 jenkins*   * *回到 Gitlab ,复制 token* * *复制后填写到此* 2.2.9 构建 CI/CD 环境在流水线任务中编写流水线脚本,完成后触发构建,要求基于 GitLab 中的 项目自动完成代码编译、镜像构建与推送、并自动发布服务到 Kubernetes 集群 中。# 创建命名空间[root@k8s-master-node1 ~]# kubectl create ns springcloud* *新建流水线*    * *添加 Gitlab 用户密码*  * *记住脚本路径的名称 Jenkinsfile ,后面创建的流水线文件名与此匹配* * *Harbor 仓库创建公开项目 springcloud* * *返回 Gitlab 准备编写流水线* # 添加映射[root@k8s-master-node1 ~]# cat /etc/hosts192.168.59.200 apiserver.cluster.local # 选择这一行# 进入jenkins 添加映射[root@k8s-master-node1 ~]# kubectl exec -it -n devops jenkins-bbf477c4f-55vgj bashkubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.root@jenkins-bbf477c4f-55vgj:/# echo "192.168.200.59 apiserver.cluster.local" >> /etc/hostsroot@jenkins-bbf477c4f-55vgj:/# cat /etc/hosts # 查看是否成功 # 编写流水线pipeline{    agent none    stages{        stage('mvn-build'){            agent{                docker{                    image '192.168.3.10/library/maven'                    args '-v /root/.m2:/root/.m2'                }            }            steps{                sh 'cp -rvf /opt/repository /root/.m2'                sh 'mvn package -DskipTests'            }        }        stage('image-build'){            agent any            steps{                sh 'cd gateway && docker build -t 192.168.3.10/springcloud/gateway -f Dockerfile .'                sh 'cd config && docker build -t 192.168.3.10/springcloud/config -f Dockerfile .'                sh 'docker login 192.168.3.10 -u=admin -p=Harbor12345'                sh 'docker push 192.168.3.10/springcloud/gateway'                sh 'docker push 192.168.3.10/springcloud/config'            }        }        stage('cloud-deployment'){            agent any            steps{                sh 'sed -i "s/sqshq\\/piggymetrics-gateway/192.168.3.10\\/springcloud\\/gateway/g" yaml/deployment/gateway-deployment.yaml'                sh 'sed -i "s/sqshq\\/piggymetrics-config/192.168.3.10\\/springcloud\\/config/g" yaml/deployment/config-deployment.yaml'                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/deployment/gateway-deployment.yaml'                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/deployment/config-deployment.yaml'                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/svc/gateway-svc.yaml'                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/svc/config-svc.yaml'            }        }    }}stages:代表整个流水线的所有执行阶段,通常stages只有1个,里面包含多个stage。stage:代表流水线中的某个阶段,可能出现n个。一般分为拉取代码,编译构建,部署等阶段。steps:代表一个阶段内需要执行的逻辑。steps里面是shell脚本,git拉取代码,ssh远程发布等任意内容。* *保存流水线文件,配置Webhook触发构建*  * *取消勾选 SSL 选择, Add webhook 创建*![](vx_images/545790416256726.png =900x) * *创建成功进行测试,成功后返回 jenkins 会发现流水线已经开始自动构建* * *流水线执行成功* * *springcloud 项目镜像上传成功*# 流水线构建的项目全部运行[root@k8s-master-node1 ~]# kubectl get pod -n springcloud NAME                       READY   STATUS    RESTARTS      AGEconfig-77c74dd878-8kl4x    1/1     Running   0             28sgateway-5b46966894-twv5k   1/1     Running   1 (19s ago)   28s[root@k8s-master-node1 ~]# kubectl -n springcloud get serviceNAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGEconfig    NodePort   10.96.137.40   <none>        8888:30015/TCP   4m3sgateway   NodePort   10.96.121.82   <none>        4000:30010/TCP   4m4s* *等待 PIg 微服务启动,访问 ip:30010 查看构建成功*2.2.10 服务网格:创建 Ingress Gateway将 Bookinfo 应用部署到 default 命名空间下,请为 Bookinfo 应用创建一个网 关,使外部可以访问 Bookinfo 
应用。上传ServiceMesh.tar.gz包[root@k8s-master-node1 ~]# tar -zxvf ServiceMesh.tar.gz [root@k8s-master-node1 ~]# cd ServiceMesh/images/[root@k8s-master-node1 images]# docker load -i image.tar 部署Bookinfo应用到kubernetes集群:[root@k8s-master-node1 images]# cd /root/ServiceMesh/[root@k8s-master-node1 ServiceMesh]# kubectl apply -f bookinfo/bookinfo.yamlservice/details createdserviceaccount/bookinfo-details createddeployment.apps/details-v1 createdservice/ratings createdserviceaccount/bookinfo-ratings createddeployment.apps/ratings-v1 createdservice/reviews createdserviceaccount/bookinfo-reviews createddeployment.apps/reviews-v1 createdservice/productpage createdserviceaccount/bookinfo-productpage createddeployment.apps/productpage-v1 created[root@k8s-master-node1 ServiceMesh]# kubectl get podNAME                              READY   STATUS    RESTARTS   AGEdetails-v1-79f774bdb9-kndm9       1/1     Running   0          7sproductpage-v1-6b746f74dc-bswbx   1/1     Running   0          7sratings-v1-b6994bb9-6hqfn         1/1     Running   0          7sreviews-v1-545db77b95-j72x5       1/1     Running   0          7s[root@k8s-master-node1 ServiceMesh]# vim bookinfo-gateway.yamlapiVersion: networking.istio.io/v1alpha3kind: Gatewaymetadata:  name: bookinfo-gatewayspec:  selector:    istio: ingressgateway  servers:  - port:      number: 80      name: http      protocol: HTTP    hosts:     - "*" ---apiVersion: networking.istio.io/v1alpha3kind: VirtualServicemetadata:  name: bookinfospec:  hosts:   - "*"    gateways:  - bookinfo-gateway  http:  - match:     - uri:        exact: /productpage     - uri:        prefix: /static    - uri:        exact: /login    - uri:        exact: /logout    - uri:        prefix: /api/v1/products    route: # 定义路由转发目的地列表    - destination:        host: productpage        port:          number: 9080[root@k8s-master-node1 ServiceMesh]# kubectl apply -f bookinfo-gateway.yamlgateway.networking.istio.io/bookinfo-gateway createdvirtualservice.networking.istio.io/bookinfo created [root@k8s-master-node1 ServiceMesh]#kubectl get VirtualService bookinfo -o yamlbookinfo-gateway || exact: /productpage || destination || host: productpage || number: 9080[root@k8s-master-node1 ServiceMesh]#kubectl get gateway bookinfo-gateway -o yamlistio: ingressgateway2.2.11 KubeVirt 运维:创建 VM使用提供的镜像在 kubevirt 命名空间下创建一台 VM,名称为 exam,指定 VM 的内存、CPU、网卡和磁盘等配置。[root@k8s-master-node1 ~]# kubectl explain kubevirt.spec. 
--recursive |grep use
         useEmulation   <boolean>
[root@k8s-master-node1 ~]# kubectl -n kubevirt edit kubevirt
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration: #{}
      useEmulation: true
[root@k8s-master-node1 ~]# vim vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: exam
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: vm
              disk: {}
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: vm
          containerDisk:
            image: fedora-virt:v1.0
            imagePullPolicy: IfNotPresent
[root@k8s-master-node1 ~]# kubectl apply -f vm.yaml
virtualmachine.kubevirt.io/exam created
[root@k8s-master-node1 ~]# kubectl get virtualmachine
NAME   AGE   STATUS    READY
exam   31s   Running   True
[root@k8s-master-node1 ~]# kubectl delete -f vm.yaml
virtualmachine.kubevirt.io "exam" deleted
2.2.12 完成容器云平台的调优或排错工作。(本任务只公布考试范围,不公布赛题)
任务 3 容器云运维开发(10 分)
2.3.1 管理 Job 服务
Kubernetes Python 运维脚本开发-实现 Job 服务管理。
2.3.2 自定义调度器
Kubernetes Python 运维脚本开发-实现调度器管理。
2.3.3 编写 Kubernetes 容器云平台自动化运维工具。(本任务只公布考试范围,不公布赛题)
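针对 2.3.1 的 Job 管理,在编写 Python 运维脚本之前,可以先用 kubectl 验证等价操作,便于对照脚本逻辑(补充示例;exam-job 名称与 busybox 镜像均为假设,可替换为本地已有镜像):
kubectl create job exam-job --image=busybox:latest -- echo hello
kubectl wait --for=condition=complete job/exam-job --timeout=120s
# succeeded 字段为 1 表示 Job 执行成功
kubectl get job exam-job -o jsonpath='{.status.succeeded}'
kubectl delete job exam-job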
  • [技术干货] 一键部署k8s
    命令两台安装好docker的Linux主机registry  192.168.XX.YYnode1     192.168.XX.YY#仓库机,在将要安装镜像仓库的机子上进行的操作:#拉取registry镜像docker pull registry#在本地创建文件夹,将来将容器内的文件夹映射到此文件夹mkdir /myregistry#基于registry镜像启动容器#  -d 容器后台运行#  -p 5000:5000  将镜像仓库容器提供服务的5000端口映射到宿主机5000端口#  --name  pri_registry 将容器命名为pri_registry#  -v /myregistry:/var/lib/registry 将容器内的/var/lib/registry文件夹映射到宿主机的/myregistry文件夹#  --restart=always  设置容器故障自动重启docker run -d -p 5000:5000 --name pri_registry -v /myregistry:/var/lib/registry --restart=always registry#查看容器docker ps -a#测试一下是否能访问到容器内的catalog文件,如果可以,标识成功,否则失败curl -X GET http://192.168.XX.XX:5000/v2/_catalog###此IP为运行容器的宿主机的IP地址###正确访问执行结果如下:{"repositories":[]}#标识当前仓库中没有任何镜像,为空#以下步骤是从官网拉取一个测试镜像,进行标注,推送到自建的私有仓库docker pull busyboxdocker tag busybox:latest 192.168.XX.XX:5000/busybox:latestdocker push 192.168.XX.XX:5000/busybox:latest#如遇到问题vim /usr/lib/systemd/system/docker.service ExecStart=/usr/bin/dockerd --insecure-registry 192.168.XX.XX:5000systemctl daemon-reloadsystemctl restart dockerdocker restart pri_registry####若要允许远程访问vim /usr/lib/systemd/system/docker.service ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --insecure-registry 192.168.XX.XX:5000###此时可以把上面push操作再测试一下docker push 192.168.XX.XX:5000/busybox:latestcurl -X GET http://192.168.XX.XX:5000/v2/_catalog##查看到的结果如下{"repositories":["busybox"]}#表示我们的镜像busybox已经push到私有仓库###查看仓库镜像:ls /myregistry/docker/registry/v2/repositories##客户机,在镜像仓库所在机器之外的另外一台装好了docker的机器上操作docker pull 192.168.XX.XX:5000/busybox#如遇到问题vim /usr/lib/systemd/system/docker.service ExecStart=/usr/bin/dockerd --insecure-registry 192.168.XX.XX:5000systemctl daemon-reloadsystemctl restart docker1  ip a 2  vi /etc/sysconfig/network-scripts/ifcfg-ens333  vi /etc/sysconfig/network-scripts/ifcfg-ens344  systemctl restart network    9  dhclient   10  ping www.baidu.com   11  systemctl stop firewalld   12  setenforce 0   13  ping www.baidu.com   14  vi /etc/sysconfig/network-scripts/ifcfg-ens33   15  systemctl restart network   18  vi /etc/sysconfig/n   30  systemctl stop firewalld   31  systemctl disable firewalld   32  setenforce 0   34  vi /etc/selinux/configdisabled   35  cd /etc/yum.repos.d/   36  ls   37  vi /etc/fstab /dev/cdrom /mnt 9660 defaults 0 0         init 6    38  ll /mnt   39  mount /dev/cdrom /mnt/   40  ll /mnt/   41  ls   42  mkdir backup   43  mv C* backup/   44  ls   45  vi local.repo   46  yum clean all   47  yum makecache   48  yum repolist      55  ll /mnt/   56  umount /mnt/   57  ll /mnt/   58  mount chinaskills_cloud_paas_v2.1.iso /mnt/   59  cp -rvf /mnt/* /opt/   60  umount /mnt/   61  mv /opt/kubeeasy-v2.0 /usr/bin/kubeeasy    65  yum -y install wget   66  sudo wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo   69  wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo          yum makecache   73  yum repolist   75  yum -y install wget   76  mv backup/CentOS-Base.repo ./          mkdir backup   95  cd ../   96  mv C* backup/   97  ls   98  vi local.repo[centos]name=centosbaseurl=file:///mntgpgcheck=0enabled=1    99  yum makecahe  102   yum -y install wget  103  sudo wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo  106  yum install -y yum-utils device-mapper-persistent-data lvm2  107  yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo  108  yum makecache  111  sudo yum remove docker-ce docker-ce-cli containerd.io -y  114  yum install docker-ce-20.10.21 
# Shell history from preparing the cluster nodes for the kubeeasy one-click deployment
# (history numbers stripped, obvious typos fixed, repeated repo/wget steps omitted):

# Network configuration
ip a
vi /etc/sysconfig/network-scripts/ifcfg-ens33
vi /etc/sysconfig/network-scripts/ifcfg-ens34
systemctl restart network
dhclient
ping www.baidu.com

# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
vi /etc/selinux/config      # SELINUX=disabled

# Local yum repository from the CD-ROM
cd /etc/yum.repos.d/
vi /etc/fstab               # add: /dev/cdrom /mnt iso9660 defaults 0 0, then reboot with init 6
mount /dev/cdrom /mnt/
ll /mnt/
mkdir backup
mv C* backup/
vi local.repo
[centos]
name=centos
baseurl=file:///mnt
gpgcheck=0
enabled=1
yum clean all
yum makecache
yum repolist

# Copy the kubeeasy offline package from the competition ISO
umount /mnt/
mount chinaskills_cloud_paas_v2.1.iso /mnt/
cp -rvf /mnt/* /opt/
umount /mnt/
mv /opt/kubeeasy-v2.0 /usr/bin/kubeeasy

# Aliyun mirror repository
yum -y install wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
yum repolist

# Docker CE installation
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache
sudo yum remove docker-ce docker-ce-cli containerd.io -y
yum install docker-ce-20.10.21 docker-ce-cli-20.10.21 containerd.io
systemctl start docker
docker -v
systemctl enable docker

# Registry mirrors for faster image pulls
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<EOF
{
    "registry-mirrors": [
        "https://docker.1ms.run",
        "https://docker.xuanyuan.me"
    ]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
docker run -dit -p 80:80 nginx:latest    # quick smoke test
docker image ls

# Build sshpass (needed by kubeeasy for password-based SSH)
yum install gcc.x86_64 -y
yum install gcc-c++.x86_64 -y
wget https://nchc.dl.sourceforge.net/project/sshpass/sshpass/1.06/sshpass-1.06.tar.gz
tar zxf sshpass-1.06.tar.gz
cd sshpass-1.06
./configure --prefix=/usr/local/
make && make install
# (or simply: yum -y install sshpass; the raw history also builds sshpass-1.10 the same way:
#  tar zxf sshpass-1.10.tar.gz && cd sshpass-1.10 && ./configure && make && make install)

# Install cluster dependencies with kubeeasy
kubeeasy install depend --host 192.168.122.11,192.168.122.12 --user root --password 123456 --offline-file /opt/dependencies/packages.tar.gz
yum -y install rsync

# Kubernetes yum repository and client tools
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache
yum install kubelet-1.25.2 kubeadm-1.25.2 kubectl-1.25.2 --nogpgcheck -y
# (alternatively: yum -y install kubelet-1.25.2 kubeadm-1.25.2 kubectl-1.25.2 --disableexcludes=kubernetes)
yum search kubeadm

# NFS client packages
sudo yum install nfs-utils rpcbind
sudo systemctl restart nfs-server rpcbind nfs-client.target
sudo systemctl status nfs-server rpcbind

# Deploy the cluster
kubeeasy install k8s --master 192.168.122.11 --worker 192.168.122.12 --user root --password 123456 --offline-file /opt/kubeeasy.tar.gz
kubectl cluster-info
kubectl get pods
kubectl get nodes
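Before moving on it is worth confirming that both nodes registered and are Ready; a minimal check, assuming the two-node layout above, could be:

kubectl get nodes -o wide
kubectl wait --for=condition=Ready node --all --timeout=300s     # blocks until every node reports Ready
kubectl get pods -A --field-selector=status.phase!=Running       # any pod listed here is still starting or broken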
Module 2: Container Cloud (30 points)

Task 1: Container cloud platform setup (5 points)

2.1.1 Deploy the container cloud platform
Use the OpenStack private cloud to create two instances, one as the Kubernetes master node and one as the worker node, then deploy the Kubernetes cluster and install the Istio service mesh, KubeVirt virtualization and the Harbor image registry.
# Create the two instances and configure their networks, then deploy Kubernetes
[root@localhost ~]# mount -o loop chinaskills_cloud_paas_v2.0.2.iso /mnt/
[root@localhost ~]# cp -rfv /mnt/* /opt/
[root@localhost ~]# umount /mnt/
[root@master ~]# hostnamectl set-hostname master && su
[root@worker ~]# hostnamectl set-hostname worker && su
# Install kubeeasy
[root@master ~]# mv /opt/kubeeasy /usr/bin/kubeeasy
# Install the dependencies
[root@master ~]# kubeeasy install depend \
--host 192.168.59.200,192.168.59.201 \
--user root \
--password 000000 \
--offline-file /opt/dependencies/base-rpms.tar.gz
# Install Kubernetes
[root@master ~]# kubeeasy install k8s \
--master 192.168.59.200 \
--worker 192.168.59.201 \
--user root \
--password 000000 \
--offline-file /opt/kubernetes.tar.gz
# Install the Istio service mesh
[root@master ~]# kubeeasy add --istio istio
# Install KubeVirt virtualization
[root@master ~]# kubeeasy add --virt kubevirt
# Install the Harbor registry
[root@master ~]# kubeeasy add --registry harbor
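A quick way to confirm the three add-ons actually came up (a sketch; istio-system and kubevirt are the usual namespaces for Istio and KubeVirt, the Harbor namespace depends on how kubeeasy installs it):

kubectl get pods -n istio-system
kubectl get pods -n kubevirt
kubectl get ns | grep -Ei 'istio|kubevirt|harbor'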
[root@k8s-master-node1 ~]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: exam
spec:
  containers:
  - name: exam
    image: nginx:latest
    imagePullPolicy: IfNotPresent
    env:
    - name: exam
      value: "2022"
[root@k8s-master-node1 ~]# kubectl apply -f pod.yaml
[root@k8s-master-node1 ~]# kubectl get pod

# Deploy the Istio service mesh (enable sidecar injection for the exam namespace)
[root@k8s-master-node1 ~]# kubectl create ns exam
namespace/exam created
[root@k8s-master-node1 ~]# kubectl edit ns exam
# change to:
  labels:
    istio-injection: enabled
[root@k8s-master-node1 ~]# kubectl describe ns exam   # verify

Task 2: Container cloud service operations (15 points)

2.2.1 Containerize Node-Exporter
Write a Dockerfile to build an exporter image: based on centos, install and configure Node-Exporter and have the service start with the container.
yum -y install git
# Upload the Hyperf.tar.gz package
[root@k8s-master-node1 ~]# tar -zxvf Hyperf.tar.gz
[root@k8s-master-node1 ~]# cd hyperf/
[root@k8s-master-node1 hyperf]# docker load -i centos_7.9.2009.tar
# Upload the node_exporter-1.7.0.linux-amd64.tar.gz package
[root@k8s-master-node1 hyperf]# vim Dockerfile-exporter
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
ADD node_exporter-1.7.0.linux-amd64.tar.gz /root/
EXPOSE 9100
ENTRYPOINT ["./root/node_exporter-1.7.0.linux-amd64/node_exporter"]
[root@k8s-master-node1 hyperf]# docker build -t monitor-exporter:v1.0 -f Dockerfile-exporter .

2.2.2 Containerize Alertmanager
Write a Dockerfile to build an alert image: based on centos:latest, install and configure Alertmanager and have the service start with the container.
# Upload the alertmanager-0.26.0.linux-amd64.tar.gz package
[root@k8s-master-node1 hyperf]# vim Dockerfile-alert
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
ADD alertmanager-0.26.0.linux-amd64.tar.gz /root/
EXPOSE 9093 9094
ENTRYPOINT ["./root/alertmanager-0.26.0.linux-amd64/alertmanager","--config.file","/root/alertmanager-0.26.0.linux-amd64/alertmanager.yml"]
[root@k8s-master-node1 hyperf]# docker build -t monitor-alert:v1.0 -f Dockerfile-alert .

2.2.3 Containerize Grafana
Write a Dockerfile to build a grafana image: based on centos, install and configure Grafana and have the service start with the container.
# Upload the grafana-6.4.1.linux-amd64.tar.gz package
[root@k8s-master-node1 hyperf]# vim Dockerfile-grafana
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
ADD grafana-6.4.1.linux-amd64.tar.gz /root/
EXPOSE 3000
ENTRYPOINT ["./root/grafana-6.4.1/bin/grafana-server","-homepath","/root/grafana-6.4.1/"]
[root@k8s-master-node1 hyperf]# docker build -t monitor-grafana:v1.0 -f Dockerfile-grafana .
[root@k8s-master-node1 hyperf]# docker run -d --name grafana-exam-jiance monitor-grafana:v1.0 && sleep 5 && docker exec grafana-exam-jiance ps -aux && docker rm -f grafana-exam-jiance

2.2.4 Containerize Prometheus
Write a Dockerfile to build a prometheus image: based on centos, install and configure Prometheus and have the service start with the container.
# Upload and unpack prometheus-2.13.0.linux-amd64.tar.gz
[root@k8s-master-node1 hyperf]# tar -zxvf prometheus-2.13.0.linux-amd64.tar.gz
[root@k8s-master-node1 hyperf]# mv prometheus-2.13.0.linux-amd64/prometheus.yml /root/hyperf && rm -rf prometheus-2.13.0.linux-amd64
[root@k8s-master-node1 hyperf]# vim Dockerfile-prometheus
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
ADD prometheus-2.13.0.linux-amd64.tar.gz /root/
RUN mkdir -p /data/prometheus/
COPY prometheus.yml /data/prometheus/
EXPOSE 9090
ENTRYPOINT ["./root/prometheus-2.13.0.linux-amd64/prometheus","--config.file","/data/prometheus/prometheus.yml"]
[root@k8s-master-node1 hyperf]# docker build -t monitor-prometheus:v1.0 -f Dockerfile-prometheus .
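Before wiring the images into Compose it can be worth a quick smoke test of one of them; a sketch for the exporter image (the container name and host port are arbitrary):

docker run -d --name exporter-test -p 9100:9100 monitor-exporter:v1.0
sleep 3 && curl -s http://localhost:9100/metrics | head -n 5    # node_exporter metrics should appear
docker rm -f exporter-test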
[root@k8s-master-node1 hyperf]# vim prometheus.yml
# scrape_configs changes:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
    - targets: ['node:9100']
  - job_name: 'alertmanager'
    static_configs:
    - targets: ['alertmanager:9093']
  - job_name: 'node-exporter'
    static_configs:
    - targets: ['node:9100']

2.2.5 Orchestrate and deploy Prometheus
Write a docker-compose.yaml that deploys the monitoring system from the exporter, alert, grafana and prometheus images.
[root@k8s-master-node1 hyperf]# vim docker-compose.yaml
version: '3'
services:
  node:
    container_name: monitor-node
    image: monitor-exporter:v1.0
    restart: always
    hostname: node
    ports:
      - 9100:9100
  alertmanager:
    container_name: monitor-alertmanager
    image: monitor-alert:v1.0
    depends_on:
      - node
    restart: always
    hostname: alertmanager
    links:
      - node
    ports:
      - 9093:9093
      - 9094:9094
  grafana:
    container_name: monitor-grafana
    image: monitor-grafana:v1.0
    restart: always
    depends_on:
      - node
      - alertmanager
    hostname: grafana
    links:
      - node
      - alertmanager
    ports:
      - 3000:3000
  prometheus:
    container_name: monitor-prometheus
    image: monitor-prometheus:v1.0
    restart: always
    depends_on:
      - node
      - alertmanager
      - grafana
    hostname: prometheus
    links:
      - node
      - alertmanager
      - grafana
    ports:
      - 9090:9090
[root@k8s-master-node1 ~]# docker-compose up -d
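A minimal health check after docker-compose up -d (a sketch, assuming the default host ports above and that it runs on the Compose host):

docker-compose ps
curl -s -o /dev/null -w "node-exporter: %{http_code}\n" http://localhost:9100/metrics
curl -s -o /dev/null -w "alertmanager:  %{http_code}\n" http://localhost:9093/-/healthy
curl -s -o /dev/null -w "grafana:       %{http_code}\n" http://localhost:3000/login
curl -s -o /dev/null -w "prometheus:    %{http_code}\n" http://localhost:9090/-/healthy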
2.2.6 Install Jenkins
Deploy Jenkins into the devops namespace, install the offline plugins, and set up the Jenkins login credentials and authorization strategy.
# Upload the BlueOcean.tar.gz package
[root@k8s-master-node1 ~]# tar -zxvf BlueOcean.tar.gz
[root@k8s-master-node1 ~]# cd BlueOcean/images/
[root@k8s-master-node1 images]# docker load -i java_8-jre.tar
[root@k8s-master-node1 images]# docker load -i jenkins_jenkins_latest.tar
[root@k8s-master-node1 images]# docker load -i gitlab_gitlab-ce_latest.tar
[root@k8s-master-node1 images]# docker load -i maven_latest.tar
[root@k8s-master-node1 images]# docker tag maven:latest 192.168.59.200/library/maven
[root@k8s-master-node1 images]# docker login 192.168.59.200
Username: admin
Password: (Harbor12345)
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
[root@k8s-master-node1 images]# docker push 192.168.59.200/library/maven

# Install Jenkins
[root@k8s-master-node1 BlueOcean]# kubectl create ns devops
[root@k8s-master-node1 BlueOcean]# kubectl create deployment jenkins -n devops --image=jenkins/jenkins:latest --port 8080 --dry-run -o yaml > jenkins.yaml
[root@k8s-master-node1 BlueOcean]# vim jenkins.yaml   # edit as follows
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: jenkins
  name: jenkins
  namespace: devops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  #strategy: {}
  template:
    metadata:
      #creationTimestamp: null
      labels:
        app: jenkins
    spec:
      nodeName: k8s-master-node1        # pin the pod to the master node
      containers:
      - image: jenkins/jenkins:latest
        imagePullPolicy: IfNotPresent
        name: jenkins
        ports:
        - containerPort: 8080
          name: jenkins8080
        securityContext:
          runAsUser: 0
          privileged: true
        volumeMounts:
        - name: jenkins-home
          mountPath: /home/jenkins_home/
        - name: docker-home
          mountPath: /run/docker.sock
        - name: docker
          mountPath: /usr/bin/docker
        - name: kubectl
          mountPath: /usr/bin/kubectl
        - name: kube
          mountPath: /root/.kube
      volumes:
      - name: jenkins-home
        hostPath:
          path: /home/jenkins_home/
      - name: docker-home
        hostPath:
          path: /run/docker.sock
      - name: docker
        hostPath:
          path: /usr/bin/docker
      - name: kubectl
        hostPath:
          path: /usr/bin/kubectl
      - name: kube
        hostPath:
          path: /root/.kube
[root@k8s-master-node1 BlueOcean]# kubectl apply -f jenkins.yaml
deployment.apps/jenkins created
[root@k8s-master-node1 ~]# kubectl get pod -n devops
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7d4f5696b7-hqw9d   1/1     Running   0          88s
# Enter the Jenkins pod and confirm that docker and kubectl are available inside it
[root@k8s-master-node1 ~]# kubectl exec -it -n devops jenkins-7d4f5696b7-hqw9d bash

[root@k8s-master-node1 BlueOcean]# kubectl expose deployment jenkins -n devops --port=8080 --target-port=30880 --dry-run -o yaml >> jenkins.yaml
[root@k8s-master-node1 BlueOcean]# vim jenkins.yaml   # edit the appended Service as follows
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: jenkins
  name: jenkins
  namespace: devops
spec:
  ports:
  - port: 8080
    #protocol: TCP
    name: jenkins8080
    nodePort: 30880
  - name: jenkins
    port: 50000
    nodePort: 30850
  selector:
    app: jenkins
  type: NodePort
[root@k8s-master-node1 BlueOcean]# kubectl apply -f jenkins.yaml
service/jenkins created
[root@k8s-master-node1 ~]# kubectl get -n devops svc
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
jenkins   NodePort   10.96.53.170   <none>        8080:30880/TCP   10s
# Install Blue Ocean and the other offline plugins from the provided package
[root@k8s-master-node1 BlueOcean]# kubectl -n devops cp plugins/ jenkins-7d4f5696b7-hqw9d:/var/jenkins_home/
# Open ip:30880 in a browser to reach Jenkins
# Read the initial admin password
[root@k8s-master-node1 BlueOcean]# kubectl -n devops exec jenkins-7d4f5696b7-hqw9d -- cat /var/jenkins_home/secrets/initialAdminPassword
32c47352c469a4ef58e8a797226949e88
# Because the plugins were installed offline, restart Jenkins by appending /restart to the URL
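Once the Service is exposed, a quick reachability check from the master node saves a trip to the browser (a sketch; substitute the real node IP for 192.168.59.200):

kubectl -n devops rollout status deployment/jenkins
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.59.200:30880/login    # 200 means the UI is up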
2.2.7 Install GitLab
Deploy GitLab to the default namespace, set the root user password, create a public project, and push the provided code to that project.
[root@k8s-master-node1 BlueOcean]# kubectl create deployment gitlab -n devops --image=gitlab/gitlab-ce:latest --port 80 --dry-run -o yaml > gitlab.yaml
W0222 12:00:34.346609   25564 helpers.go:555] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-master-node1 BlueOcean]# vim gitlab.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: gitlab
  name: gitlab
  namespace: devops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab
  #strategy: {}
  template:
    metadata:
      #creationTimestamp: null
      labels:
        app: gitlab
    spec:
      containers:
      - image: gitlab/gitlab-ce:latest
        imagePullPolicy: IfNotPresent
        name: gitlab-ce
        ports:
        - containerPort: 80
        env:
        - name: GITLAB_ROOT_PASSWORD
          value: admin@123
[root@k8s-master-node1 BlueOcean]# kubectl apply -f gitlab.yaml
deployment.apps/gitlab created
[root@k8s-master-node1 BlueOcean]# kubectl get pod -n devops
NAME                      READY   STATUS    RESTARTS      AGE
gitlab-5b47c8d994-8s9qb   1/1     Running   0             17s
jenkins-bbf477c4f-55vgj   1/1     Running   2 (15m ago)   34m
[root@k8s-master-node1 BlueOcean]# kubectl expose deployment gitlab -n devops --port=80 --target-port=30888 --dry-run=client -o yaml >> gitlab.yaml
[root@k8s-master-node1 BlueOcean]# vim gitlab.yaml   # edit the appended Service
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: gitlab
  name: gitlab
  namespace: devops
spec:
  ports:
  - port: 80
    nodePort: 30888
  selector:
    app: gitlab
  type: NodePort
[root@k8s-master-node1 BlueOcean]# kubectl apply -f gitlab.yaml
deployment.apps/gitlab configured
service/gitlab created
[root@k8s-master-node1 BlueOcean]# kubectl get svc -n devops
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
gitlab    NodePort   10.96.149.160   <none>        80:30888/TCP     6s
jenkins   NodePort   10.96.174.123   <none>        8080:30880/TCP   8m7s
# Wait for GitLab to start, then log in at IP:30888 as root / admin@123

# Push the code in the springcloud folder to the project (GitLab shows example commands)
[root@k8s-master-node1 BlueOcean]# cd springcloud/
[root@k8s-master-node1 springcloud]# git config --global user.name "Administrator"
[root@k8s-master-node1 springcloud]# git config --global user.email "admin@example.com"
[root@k8s-master-node1 springcloud]# git remote remove origin
[root@k8s-master-node1 springcloud]# git remote add origin cid:link_0
[root@k8s-master-node1 springcloud]# git add .
[root@k8s-master-node1 springcloud]# git commit -m "Initial commit"
# On branch master
nothing to commit, working directory clean
[root@k8s-master-node1 springcloud]# git push -u origin master
Username for 'http://192.168.100.23:30888': root
Password for 'http://root@192.168.100.23:30888': (admin@123)
Counting objects: 3192, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (1428/1428), done.
Writing objects: 100% (3192/3192), 1.40 MiB | 0 bytes/s, done.
Total 3192 (delta 1233), reused 3010 (delta 1207)
remote: Resolving deltas: 100% (1233/1233), done.
To cid:link_0
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
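If the push hangs, GitLab is usually still initializing; a hedged way to confirm both ends (assuming the deployment name above and running from inside the springcloud working copy) is:

kubectl -n devops rollout status deployment/gitlab    # waits until the GitLab pod is ready
git remote -v                                         # origin should point at the GitLab project URL
git log --oneline -1                                  # the commit that should now be on master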
2.2.8 Integrate Jenkins with GitLab
Create a pipeline job in Jenkins, configure the GitLab connection in Jenkins, and set up the WebHook.
# In GitLab, generate an "Access Token" named jenkins
# Back in Jenkins, add the GitLab connection
# Copy the token from GitLab and paste it into the corresponding Jenkins credential

2.2.9 Build the CI/CD environment
Write a pipeline script in the job so that a triggered build automatically compiles the code from the GitLab project, builds and pushes the images, and deploys the services to the Kubernetes cluster.
# Create the namespace
[root@k8s-master-node1 ~]# kubectl create ns springcloud
# Note the script path name "Jenkinsfile": the pipeline file created later must match it
# Create a public project named springcloud in the Harbor registry
# Return to GitLab to write the pipeline

# Add the host mapping
[root@k8s-master-node1 ~]# cat /etc/hosts
192.168.59.200 apiserver.cluster.local    # this is the line to copy
# Add the same mapping inside the Jenkins pod
[root@k8s-master-node1 ~]# kubectl exec -it -n devops jenkins-bbf477c4f-55vgj bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@jenkins-bbf477c4f-55vgj:/# echo "192.168.59.200 apiserver.cluster.local" >> /etc/hosts
root@jenkins-bbf477c4f-55vgj:/# cat /etc/hosts   # verify

# The pipeline script (Jenkinsfile)
pipeline{
    agent none
    stages{
        stage('mvn-build'){
            agent{
                docker{
                    image '192.168.3.10/library/maven'
                    args '-v /root/.m2:/root/.m2'
                }
            }
            steps{
                sh 'cp -rvf /opt/repository /root/.m2'
                sh 'mvn package -DskipTests'
            }
        }
        stage('image-build'){
            agent any
            steps{
                sh 'cd gateway && docker build -t 192.168.3.10/springcloud/gateway -f Dockerfile .'
                sh 'cd config && docker build -t 192.168.3.10/springcloud/config -f Dockerfile .'
                sh 'docker login 192.168.3.10 -u=admin -p=Harbor12345'
                sh 'docker push 192.168.3.10/springcloud/gateway'
                sh 'docker push 192.168.3.10/springcloud/config'
            }
        }
        stage('cloud-deployment'){
            agent any
            steps{
                sh 'sed -i "s/sqshq\\/piggymetrics-gateway/192.168.3.10\\/springcloud\\/gateway/g" yaml/deployment/gateway-deployment.yaml'
                sh 'sed -i "s/sqshq\\/piggymetrics-config/192.168.3.10\\/springcloud\\/config/g" yaml/deployment/config-deployment.yaml'
                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/deployment/gateway-deployment.yaml'
                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/deployment/config-deployment.yaml'
                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/svc/gateway-svc.yaml'
                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/svc/config-svc.yaml'
            }
        }
    }
}
# stages: all execution phases of the pipeline; there is normally one stages block containing several stage blocks
# stage:  a single phase of the pipeline (checkout, build, deploy, ...); there may be several
# steps:  the logic executed inside a stage, such as shell commands, git checkout, or remote deployment over ssh

# Save the pipeline file, then configure the WebHook in GitLab to trigger builds
# Untick the SSL verification option and click "Add webhook"
# Test the webhook; on success, Jenkins starts building the pipeline automatically
# When the pipeline succeeds, the springcloud images appear in the Harbor project

# All deployed workloads are running
[root@k8s-master-node1 ~]# kubectl get pod -n springcloud
NAME                       READY   STATUS    RESTARTS      AGE
config-77c74dd878-8kl4x    1/1     Running   0             28s
gateway-5b46966894-twv5k   1/1     Running   1 (19s ago)   28s
[root@k8s-master-node1 ~]# kubectl -n springcloud get service
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
config    NodePort   10.96.137.40   <none>        8888:30015/TCP   4m3s
gateway   NodePort   10.96.121.82   <none>        4000:30010/TCP   4m4s
# Wait for the PiggyMetrics services to start, then open ip:30010 to confirm the build succeeded
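With the webhook in place, any push to master should start a build; a harmless way to test the trigger end to end (a sketch, run from inside the springcloud working copy used above) is an empty commit:

git commit --allow-empty -m "trigger CI"    # no file changes needed
git push origin master                      # the webhook should now kick off the Jenkins pipeline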
2.2.10 Service mesh: create an Ingress Gateway
Deploy the Bookinfo application to the default namespace and create a gateway for it so that Bookinfo can be reached from outside the cluster.
# Upload the ServiceMesh.tar.gz package
[root@k8s-master-node1 ~]# tar -zxvf ServiceMesh.tar.gz
[root@k8s-master-node1 ~]# cd ServiceMesh/images/
[root@k8s-master-node1 images]# docker load -i image.tar
# Deploy Bookinfo to the Kubernetes cluster
[root@k8s-master-node1 images]# cd /root/ServiceMesh/
[root@k8s-master-node1 ServiceMesh]# kubectl apply -f bookinfo/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
[root@k8s-master-node1 ServiceMesh]# kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-kndm9       1/1     Running   0          7s
productpage-v1-6b746f74dc-bswbx   1/1     Running   0          7s
ratings-v1-b6994bb9-6hqfn         1/1     Running   0          7s
reviews-v1-545db77b95-j72x5       1/1     Running   0          7s
[root@k8s-master-node1 ServiceMesh]# vim bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:                      # list of routing destinations
    - destination:
        host: productpage
        port:
          number: 9080
[root@k8s-master-node1 ServiceMesh]# kubectl apply -f bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
[root@k8s-master-node1 ServiceMesh]# kubectl get VirtualService bookinfo -o yaml
# check for: bookinfo-gateway | exact: /productpage | destination | host: productpage | number: 9080
[root@k8s-master-node1 ServiceMesh]# kubectl get gateway bookinfo-gateway -o yaml
# check for: istio: ingressgateway
# (an end-to-end curl test of the gateway is sketched at the end of this post)

2.2.11 KubeVirt operations: create a VM
Use the provided image to create a VM named exam in the kubevirt namespace, with the required memory, CPU, NIC and disk configuration.
[root@k8s-master-node1 ~]# kubectl explain kubevirt.spec. --recursive | grep use
         useEmulation   <boolean>
[root@k8s-master-node1 ~]# kubectl -n kubevirt edit kubevirt
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:   # was {}
      useEmulation: true
[root@k8s-master-node1 ~]# vim vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: exam
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: vm
              disk: {}
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: vm
          containerDisk:
            image: fedora-virt:v1.0
            imagePullPolicy: IfNotPresent
[root@k8s-master-node1 ~]# kubectl apply -f vm.yaml
virtualmachine.kubevirt.io/exam created
[root@k8s-master-node1 ~]# kubectl get virtualmachine
NAME   AGE   STATUS    READY
exam   31s   Running   True
[root@k8s-master-node1 ~]# kubectl delete -f vm.yaml
virtualmachine.kubevirt.io "exam" deleted

2.2.12 Tune or troubleshoot the container cloud platform. (Only the scope of this task is published, not the actual question.)
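Returning to 2.2.10: once the Gateway and VirtualService are applied, the application can be exercised from outside the cluster through the NodePort of the istio-ingressgateway Service. A sketch based on the standard Istio approach (the 192.168.59.200 master IP from above is an assumption about where the ingress gateway is reachable):

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
curl -s http://192.168.59.200:${INGRESS_PORT}/productpage | grep -o "<title>.*</title>"   # expect "Simple Bookstore App"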
  • [Regional preliminary question] Will the dataset in the official preliminary round be much larger than practice.in from the practice round?
    Can anything about this be shared? I am worried that a solution which passes the practice round will hit time_out in the official round.
  • [Course Learning] Cloud Technology Essentials: notes from my learning journey
    1. Introduction
To better understand the modern cloud computing stack, and in particular how enterprises can use cloud services effectively, I took Huawei Cloud's Cloud Technology Essentials course. The course combines theory with hands-on labs and helped me understand the individual cloud technologies deeply enough to apply them in real work. Below is a summary of what I learned, the skills I gained, and my notes from the hands-on exercises.

2. Theory: content and takeaways
2.1 Cloud data storage
Content: cloud storage is a core component of cloud computing. The course covered the three main storage types: object storage (OBS), suited to backup, archiving and massive amounts of unstructured data; file storage (NAS), suited to workloads that need shared access, such as content management and media processing; and block storage (EVS), suited to high-performance, low-latency workloads such as databases.
Takeaways: beyond the use cases for each storage type, I learned how to choose between them for a given requirement and how storage policies improve data availability and reliability.

2.2 Cloud networking
Content: network communication between cloud resources and connectivity across regions and clouds: Virtual Private Cloud (VPC) for logically isolated networks with subnets, routes and network ACLs; Direct Connect and VPN for securely linking an on-premises data centre to a VPC; and Elastic Load Balance (ELB) for application availability and performance.
Takeaways: I now understand the basic concepts and configuration of a VPC, how Direct Connect and VPN provide secure cross-region connectivity, and how to configure and tune load balancing to improve application stability and response times.

2.3 Choosing and using cloud databases
Content: relational and non-relational database services, their use cases and how to operate them: relational databases (RDS, e.g. MySQL and PostgreSQL) for structured data, and NoSQL databases (e.g. MongoDB and Redis) for high concurrency and flexible data models.
Takeaways: how to pick the right database type, tune database parameters to the application, and manage databases on the cloud platform, including backup and restore, performance monitoring and optimization.

2.4 Cloud security and governance
Content: security is central to cloud computing and every enterprise needs strict policies: Identity and Access Management (IAM) for user permissions and secure access to data and resources, security groups and ACLs to block unauthorized network access, and logging and auditing (CloudTrail) to track and review operations on resources.
Takeaways: how to define and enforce security policies and use the main security tools to keep cloud resources and data safe in practice.

2.5 Distributed cloud architecture
Content: distributed architecture underpins modern cloud computing, providing high availability, scalability and fault tolerance: microservice architecture to split monoliths and improve elasticity and maintainability; containerization (Docker, Kubernetes) for lightweight deployment and better resource utilization; and distributed storage and compute (e.g. HDFS and Spark) for efficient large-scale data processing.
Takeaways: the principles and benefits of microservices and containerized deployment, and the basic skills needed to build and operate distributed systems in the cloud to support large projects.

3. Labs: content and practical notes
3.1 Compute services
Lab content: creating and managing ECS cloud servers (deployment, operating system installation, network configuration) and configuring Auto Scaling (AS) groups and policies for automatic scale-out and scale-in.
Process: I created ECS instances from the Huawei Cloud console, choosing images, instance types and network settings. Some options were unclear at first, but the official documentation and community answers resolved them quickly. For AS, I created a scaling group, added ECS instances, configured trigger conditions, and verified automatic scaling under a simulated high load.
Takeaways: I can now deploy and manage cloud servers and size compute resources to business needs; auto scaling in particular showed how flexible and efficient cloud services can be.

3.2 Storage services
Lab content: Object Storage Service (OBS): creating buckets and uploading and managing objects. File storage (NAS): creating a file system and mounting it on an ECS instance.
Process: I created a bucket, uploaded files and configured permissions, then used the SDK to upload and download objects programmatically. I created a NAS file system, configured a mount target, mounted it on an ECS instance and ran read/write tests.
Problems and fixes: cross-user access to OBS failed the first time; reading the ACL (access control list) configuration guide solved the permission issue. A NAS mount failure was fixed by adjusting security group rules that were blocking network access.
Takeaways: I can now use and manage both object and file storage; SDK access and broad protocol support make storage and access convenient, and working through the permission and network issues strengthened my hands-on skills.

3.3 VPC basics
Lab content: creating and managing a VPC (subnets, route tables, security groups) and configuring VPN and Direct Connect to link an on-premises network to the VPC.
Process: I created a VPC, configured subnets and route tables, and set security group policies per application. Cross-subnet communication needed quite a bit of route debugging. I then set up a VPN gateway and configured the IPsec tunnel parameters on both ends to establish a secure connection.
Problems and fixes: my initial routing rules were messy, and repeated adjustment and testing were needed before communication was stable. The IPsec parameters took time to research since it was my first exposure to them, but completing the VPN gave me a much deeper understanding of managing it.
Takeaways: besides the basic VPC skills, I learned how network configuration isolates and connects different application environments; getting the VPN working gave me real insight into securely connecting cloud resources to on-premises networks and improved my ability to configure and debug complex networks.

3.4 Database services
Lab content: creating an RDS instance and configuring the database; automatic and manual backups and testing the restore procedure.
Process: I created an RDS instance with a suitable engine and version, configured basic security (IP whitelist, users and databases), took manual backups, configured an automatic backup policy, and simulated a database failure to restore from backup.
Problems and fixes: the first restore failed because of a parameter misconfiguration on the instance; the logs and configuration changes fixed it, helped by Huawei Cloud's documentation and online support.
Takeaways: I understand how a cloud database is created and managed and have the basics of backup and restore; troubleshooting the failed restore noticeably improved my ability to handle database incidents.

3.5 Auto Scaling basics
Lab content: creating AS groups and policies for automatic scale-out and scale-in, and combining them with the cloud monitoring service so that alarms drive scaling.
Process: I configured the scaling group (minimum and maximum instance counts, launch configuration and alarm policies), set monitoring metrics and alarm rules, triggered alarms by adjusting the load, and observed the automatic scaling respond.
Problems and fixes: unfamiliarity with the monitoring metrics made the first alarm thresholds inaccurate; gradual adjustment and testing produced sensible thresholds so that scaling correctly tracked load changes.
Takeaways: the lab deepened my understanding of auto scaling and of combining monitoring and alarm policies to adjust cloud resources dynamically, which matters for both cost efficiency and responsiveness; the repeated testing also trained my patience and adaptability when tuning complex systems.

4. Summary
The course gave me solid theory and, through the labs, stronger practical skills; every exercise showed how much the details matter to the overall result. I ran into plenty of problems and challenges, but solving each one taught me new skills and built my confidence. Going forward I hope to apply these cloud technologies in real projects and provide reliable, efficient technical support. The course also made clear how important cloud computing is to modern enterprises, how promising its future is, and how much ongoing technical progress affects day-to-day work; I intend to keep studying cloud computing and contribute to our digital transformation.