Huawei Cloud: A Driving Force Leading Digital Transformation

In today's surging wave of digitalization, cloud computing has become the core engine of enterprise innovation and growth. As a leading global cloud service provider, Huawei Cloud draws on deep technical strength, a broad global footprint, and a rich product portfolio to lay a solid foundation for enterprise digital transformation.

Huawei Cloud offers a full-stack portfolio spanning compute, storage, networking, databases, containers, and artificial intelligence.

Compute: Elastic Cloud Server (ECS) flexibly scales compute resources to match business demand, from the daily operation of a small e-commerce site to the steady support of large enterprise application systems. Bare Metal Server (BMS) serves critical workloads with extreme performance and security requirements, providing physically isolated, dedicated compute.

Storage: Object Storage Service (OBS) acts as a near-unlimited repository for massive volumes of unstructured data such as images, videos, and documents, and its RESTful API makes upload, download, and management nearly as simple as working with a local disk. Scalable File Service (SFS) provides shared file storage across multiple cloud servers over common protocols, smoothing collaboration between business modules.

Networking: Virtual Private Cloud (VPC) gives each enterprise an isolated network domain in the cloud, with self-defined subnets, route tables, and security groups for secure isolation and flexible deployment. Elastic IP (EIP) grants cloud resources an independent public network identity so services can face the global Internet.

Databases: Relational Database Service (RDS) supports multiple mainstream engines and handles structured storage, efficient queries, and complex business logic reliably. Distributed Database Middleware (DDM) targets massive-data workloads, enabling large distributed database clusters with horizontal scaling and much higher read/write throughput under high concurrency.

Beyond the product line, Huawei Cloud's strengths lie in the technical depth accumulated over Huawei's long history in communications and IT, a global network of data centers and nodes that delivers low-latency, highly available service across regions in support of multinational expansion, and a partner ecosystem of software vendors, system integrators, and consultancies that together provide one-stop service from industry solution design to professional technical support.

In practice, these capabilities cover a wide range of scenarios: traditional enterprises migrate legacy IT to the cloud for elastic scaling and fine-grained cost control, responding to the market more nimbly; Internet companies rely on it for massive user-data processing and high-concurrency transaction support with fast iteration; big-data teams use its storage and compute to mine and analyze huge datasets for decision support; and in AI, the one-stop ModelArts development platform combined with Ascend compute accelerates moving AI from the lab into production, from image recognition in security monitoring to natural language processing in intelligent customer service.

Looking ahead, Huawei Cloud will continue to deepen R&D and product innovation, expand its global market and ecosystem cooperation, and power digital transformation for more enterprises.

----------------------------------------------------------------------------------------------------
create_block_store.py

# coding: utf-8
import os

from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkevs.v2.region.evs_region import EvsRegion
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkevs.v2 import *

if __name__ == "__main__":
    # Read AK and SK from environment variables
    ak = os.environ.get("CLOUD_SDK_AK")
    sk = os.environ.get("CLOUD_SDK_SK")
    if not ak or not sk:
        print("Please set the CLOUD_SDK_AK and CLOUD_SDK_SK environment variables first")
        exit(1)

    credentials = BasicCredentials(ak, sk)
    client = EvsClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(EvsRegion.value_of("cn-north-4")) \
        .build()

    try:
        # List all volumes
        request = ListVolumesRequest()
        response = client.list_volumes(request)

        # Look for a volume named chinaskills_volume
        volume_id_to_delete = None
        for volume in response.volumes:
            if volume.name == "chinaskills_volume":
                volume_id_to_delete = volume.id
                print(f"Found volume: {volume.name}, ID: {volume.id}")
                break

        # If a matching volume ID was found, delete that volume
        if volume_id_to_delete:
            delete_request = DeleteVolumeRequest(volume_id_to_delete)
            client.delete_volume(delete_request)
            print(f"Volume {volume_id_to_delete} deleted")
        else:
            print("No volume named chinaskills_volume found; nothing to delete")

        # Create a new encrypted, multi-attach volume
        request = CreateVolumeRequest()
        listMetadataVolume = {
            "__system__cmkid": "4f765277-a00e-4c3a-81f8-2cbaa0856e18",
            "__system__encrypted": "1"
        }
        volumebody = CreateVolumeOption(
            availability_zone="cn-north-4a",
            metadata=listMetadataVolume,
            multiattach=True,
            name="chinaskills_volume",
            size=100,
            volume_type="SSD"
        )
        request.body = CreateVolumeRequestBody(volume=volumebody)
        response = client.create_volume(request)
        print(response)
    except exceptions.ClientRequestException as e:
        print(f"HTTP status code: {e.status_code}")
        print(f"Request ID: {e.request_id}")
        print(f"Error code: {e.error_code}")
        print(f"Error message: {e.error_msg}")
----------------------------------------------------------------------------------------------------
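The script above deletes and recreates the volume without waiting for intermediate states, which can race if the delete is still in progress. A small, SDK-agnostic polling helper can make such flows more robust — a sketch only, where `fetch_status` is a hypothetical stand-in for a `ShowVolume`-style status call:

```python
import time

def wait_for_status(fetch_status, target, timeout=60, interval=2):
    """Poll fetch_status() until it returns `target` or the timeout expires.

    fetch_status: zero-argument callable returning the current state string.
    Returns True once the target state is observed, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch_status() == target:
            return True
        time.sleep(interval)
    return False

# Example with a fake status source that becomes "available" on the third poll
states = iter(["creating", "creating", "available"])
print(wait_for_status(lambda: next(states), "available", timeout=5, interval=0))
```

In the real script, `fetch_status` would wrap a volume-detail query and the helper would be called after `delete_volume` and after `create_volume`.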
create_keypair.py

# coding: utf-8
from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkkps.v3.region.kps_region import KpsRegion
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkkps.v3 import *

credentials = BasicCredentials("79FWNEEYJ5FPOIMQVCHZ", "n6MRBpTarQvjBrRmKH1vE6W3o093BQ4PmKc5Ci7U")
client = KpsClient.new_builder() \
    .with_credentials(credentials) \
    .with_region(KpsRegion.value_of("cn-north-4")) \
    .build()

def create_key(name):
    # Delete any existing keypair with the same name
    request = ListKeypairsRequest()
    response = client.list_keypairs(request).to_json_object()
    for i in response['keypairs']:
        if i['keypair']['name'] == name:
            request = DeleteKeypairRequest()
            request.keypair_name = name
            client.delete_keypair(request)
    try:
        request = CreateKeypairRequest()
        encryptionKeyProtection = Encryption(
            type="default"
        )
        keyProtectionKeypair = KeyProtection(
            encryption=encryptionKeyProtection
        )
        keypairbody = CreateKeypairAction(
            name=name,
            key_protection=keyProtectionKeypair
        )
        request.body = CreateKeypairRequestBody(
            keypair=keypairbody
        )
        response = client.create_keypair(request)
        print(response)
    except exceptions.ClientRequestException as e:
        print(e.status_code)
        print(e.request_id)
        print(e.error_code)
        print(e.error_msg)

if __name__ == "__main__":
    create_key('chinaskills_keypair')
----------------------------------------------------------------------------------------------------
ecs_manager.py

import os
import json
import time
import argparse

from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkecs.v2.region.ecs_region import EcsRegion
from huaweicloudsdkecs.v2 import *

ak = "79FWNEEYJ5FPOIMQVCHZ"
sk = "n6MRBpTarQvjBrRmKH1vE6W3o093BQ4PmKc5Ci7U"
cloud_vpc_id = "b3833f32-f263-4094-b31a-d9ef1d4c5828"
cloud_subnet_id = "8e2a2942-29d7-4e83-92a9-c3eb631e5bba"

credentials = BasicCredentials(ak, sk)
client = EcsClient.new_builder() \
    .with_credentials(credentials) \
    .with_region(EcsRegion.value_of("cn-north-4")) \
    .build()

def create_ecs(name, image_id):
    # Create a postpaid ECS instance in the preconfigured VPC/subnet
    request = CreatePostPaidServersRequest()
    rootVolumeServer = PostPaidServerRootVolume(volumetype="SSD")
    listNicsServer = [PostPaidServerNic(subnet_id=cloud_subnet_id)]
    serverbody = PostPaidServer(
        flavor_ref="c7.large.2",
        image_ref=image_id,
        name=name,
        nics=listNicsServer,
        root_volume=rootVolumeServer,
        vpcid=cloud_vpc_id
    )
    request.body = CreatePostPaidServersRequestBody(server=serverbody)
    response = client.create_post_paid_servers(request)
    print(response.to_dict())

def get(name, output_file=None):
    # Poll for a server by name, retrying while it may still be creating
    max_retries = 5
    wait_time = 5
    for _ in range(max_retries):
        request = ListServersDetailsRequest()
        response = client.list_servers_details(request)
        servers = response.to_dict().get('servers', [])
        for server in servers:
            if server['name'] == name:
                print(f"Server name: {server['name']}, server ID: {server['id']}")
                server_dict = server.to_dict() if hasattr(server, 'to_dict') else dict(server)
                if output_file:
                    try:
                        with open(output_file, 'w') as f:
                            json.dump(server_dict, f, indent=4, default=str)
                        print(f"Server info saved to {output_file}")
                    except Exception as e:
                        print(f"Error while saving file: {e}")
                return
        print(f"Server named {name} not found; retrying in {wait_time} seconds...")
        time.sleep(wait_time)
    print(f"Server named {name} not found after the maximum number of retries.")

def getall(output_file=None):
    # Dump details of all servers, optionally to a JSON file
    request = ListServersDetailsRequest()
    response = client.list_servers_details(request)
    servers = response.to_dict().get('servers', [])
    serializable_servers = []
    for server in servers:
        server_dict = server.to_dict() if hasattr(server, 'to_dict') else dict(server)
        serializable_servers.append(server_dict)
    if output_file:
        try:
            with open(output_file, 'w') as f:
                json.dump(serializable_servers, f, indent=4, default=str)
            print(f"All server info saved to {output_file}")
        except Exception as e:
            print(f"Error while saving file: {e}")
    else:
        print(serializable_servers)

def delete(name):
    # Delete the server with the given name, if present
    request = ListServersDetailsRequest()
    response = client.list_servers_details(request)
    servers = response.to_dict().get('servers', [])
    for server in servers:
        if server['name'] == name:
            print(f"Server name: {server['name']}, server ID: {server['id']}")
            request = DeleteServersRequest()
            listServersbody = [ServerId(id=server['id'])]
            request.body = DeleteServersRequestBody(servers=listServersbody)
            response = client.delete_servers(request)
            print(response)
            return
    print(f"No server named {name} found.")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="ECS instance management")
    subparsers = parser.add_subparsers(dest="command")
    create_parser = subparsers.add_parser("create")
    create_parser.add_argument("-i", "--input", required=True, help="server name and image ID, in JSON")
    get_parser = subparsers.add_parser("get")
    get_parser.add_argument("-n", "--name", required=True, help="name of the server to query")
    get_parser.add_argument("-o", "--output", help="output file name (JSON)")
    getall_parser = subparsers.add_parser("getall")
    getall_parser.add_argument("-o", "--output", help="output file name (JSON)")
    delete_parser = subparsers.add_parser("delete")
    delete_parser.add_argument("-n", "--name", required=True, help="name of the server to delete")
    args = parser.parse_args()
    if args.command == "create":
        input_data = json.loads(args.input)
        create_ecs(input_data['name'], input_data['imagename'])
    elif args.command == "get":
        get(args.name, args.output)
    elif args.command == "getall":
        getall(args.output)
    elif args.command == "delete":
        delete(args.name)
----------------------------------------------------------------------------------------------------
Huawei_centos7.9_ID.txt

02a17486-1214-4e42-8da7-7d200cac585e
----------------------------------------------------------------------------------------------------
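The subcommand layout of ecs_manager.py can be exercised without any cloud calls. A minimal sketch of the same argparse structure, with made-up argument values for illustration:

```python
import argparse
import json

def build_parser():
    # Mirrors the subcommand layout of ecs_manager.py
    parser = argparse.ArgumentParser(description="ECS instance management")
    subparsers = parser.add_subparsers(dest="command")
    create_parser = subparsers.add_parser("create")
    create_parser.add_argument("-i", "--input", required=True)
    get_parser = subparsers.add_parser("get")
    get_parser.add_argument("-n", "--name", required=True)
    get_parser.add_argument("-o", "--output")
    subparsers.add_parser("getall").add_argument("-o", "--output")
    subparsers.add_parser("delete").add_argument("-n", "--name", required=True)
    return parser

# The create subcommand takes its payload as a JSON string
args = build_parser().parse_args(
    ["create", "-i", '{"name": "demo-node", "imagename": "02a17486-1214-4e42-8da7-7d200cac585e"}'])
payload = json.loads(args.input)
print(args.command, payload["name"])
```

This makes it easy to confirm that `-i` is parsed as raw JSON (with the `imagename` key, as the script expects) before wiring the commands to the SDK.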
main.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# @author YWLBTWTK
# @date 2023/9/18
from fastapi import FastAPI
import uvicorn
from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkvpc.v2.region.vpc_region import VpcRegion
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkvpc.v2 import *

app = FastAPI()
ak = "79FWNEEYJ5FPOIMQVCHZ"
sk = "n6MRBpTarQvjBrRmKH1vE6W3o093BQ4PmKc5Ci7U"
g_region = "cn-north-4"

def api_list_vpc(name):
    # Return the ID of the VPC with the given name, or None
    credentials = BasicCredentials(ak, sk)
    client = VpcClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(VpcRegion.value_of(g_region)) \
        .build()
    try:
        request = ListVpcsRequest()
        response = list(map(lambda x: {'name': x['name'], 'id': x['id']},
                            client.list_vpcs(request).to_dict()['vpcs']))
        for i in response:
            if i['name'] == name:
                print(i['id'])
                return i['id']
        return None
    except Exception:
        return None

@app.post('/cloud_vpc/create_vpc')
def api_create_vpc(data: dict):
    credentials = BasicCredentials(ak, sk)
    client = VpcClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(VpcRegion.value_of(g_region)) \
        .build()
    try:
        request = CreateVpcRequest()
        vpcbody = CreateVpcOption(
            cidr=data['cidr'],
            name=data['name']
        )
        request.body = CreateVpcRequestBody(
            vpc=vpcbody
        )
        response = client.create_vpc(request)
        print(response)
        return response
    except exceptions.ClientRequestException as e:
        print(e.status_code)
        print(e.request_id)
        print(e.error_code)
        print(e.error_msg)
        return e
    except Exception as e:
        return e

@app.get('/cloud_vpc/vpc/{vpc_name}')
def api_get_vpc(vpc_name: str):
    credentials = BasicCredentials(ak, sk)
    client = VpcClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(VpcRegion.value_of(g_region)) \
        .build()
    try:
        request = ShowVpcRequest()
        request.vpc_id = api_list_vpc(vpc_name)
        response = client.show_vpc(request)
        print(response)
        return response
    except exceptions.ClientRequestException as e:
        print(e.status_code)
        print(e.request_id)
        print(e.error_code)
        print(e.error_msg)
        return e
    except Exception as e:
        return e

@app.get('/cloud_vpc/vpc')
def api_get_list_vpc():
    credentials = BasicCredentials(ak, sk)
    client = VpcClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(VpcRegion.value_of(g_region)) \
        .build()
    try:
        request = ListVpcsRequest()
        response = client.list_vpcs(request).to_dict()
        print(response)
        return response
    except Exception as e:
        return e

@app.put('/cloud_vpc/update_vpc')
def api_update_vpc(data: dict):
    credentials = BasicCredentials(ak, sk)
    client = VpcClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(VpcRegion.value_of(g_region)) \
        .build()
    try:
        request = UpdateVpcRequest()
        request.vpc_id = api_list_vpc(data['old_name'])
        vpcbody = UpdateVpcOption(
            name=data['new_name']
        )
        request.body = UpdateVpcRequestBody(
            vpc=vpcbody
        )
        response = client.update_vpc(request)
        print(response)
        return response.to_dict()
    except Exception as e:
        return e

@app.delete('/cloud_vpc/delete_vpc')
def api_delete_vpc(data: dict):
    credentials = BasicCredentials(ak, sk)
    client = VpcClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(VpcRegion.value_of(g_region)) \
        .build()
    try:
        request = DeleteVpcRequest()
        request.vpc_id = api_list_vpc(data['vpc_name'])
        response = client.delete_vpc(request)
        print(response)
        return response.to_dict()
    except Exception as e:
        return e

if __name__ == '__main__':
    uvicorn.run(app='main:app', host='0.0.0.0', port=7045, reload=True)
----------------------------------------------------------------------------------------------------
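api_list_vpc boils the SDK response down to a name-to-id lookup. The same transformation, isolated as a pure function so it can be checked without a client (the sample data below is made up for illustration):

```python
def find_vpc_id(vpcs, name):
    """Return the id of the first VPC whose name matches, else None.

    vpcs: list of dicts shaped like the 'vpcs' key of list_vpcs().to_dict().
    """
    for entry in ({'name': v['name'], 'id': v['id']} for v in vpcs):
        if entry['name'] == name:
            return entry['id']
    return None

# Illustrative sample shaped like a ListVpcs response
sample = [
    {'name': 'vpc-default', 'id': '11111111-aaaa-bbbb-cccc-000000000001'},
    {'name': 'chinaskill-vpc', 'id': '22222222-aaaa-bbbb-cccc-000000000002'},
]
print(find_vpc_id(sample, 'chinaskill-vpc'))
```

Note that in main.py a missing VPC name yields `vpc_id = None`, which the Show/Update/Delete endpoints then pass straight to the SDK; checking for None before building the request would give a cleaner 404-style response.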
python3.txt

#!/bin/bash
rm -rf /etc/yum.repos.d/*
cat << EOF > /etc/yum.repos.d/centos.repo
[os]
name=Qcloud centos os - \$basearch
baseurl=http://mirrors.cloud.tencent.com/centos/\$releasever/os/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7
[updates]
name=Qcloud centos updates - \$basearch
baseurl=http://mirrors.cloud.tencent.com/centos/\$releasever/updates/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7
[extras]
name=Qcloud centos extras - \$basearch
baseurl=http://mirrors.cloud.tencent.com/centos/\$releasever/extras/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7
[epel]
name=EPEL for redhat/centos \$releasever - \$basearch
baseurl=http://mirrors.cloud.tencent.com/epel/\$releasever/\$basearch/
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/epel/RPM-GPG-KEY-EPEL-7
EOF
yum install python3 -y
pip3 install -i https://mirrors.cloud.tencent.com/pypi/simple --upgrade pip
pip3 config set global.index-url https://mirrors.cloud.tencent.com/pypi/simple
pip3 install huaweicloudsdkcore huaweicloudsdkecs huaweicloudsdkvpc huaweicloudsdkswr huaweicloudsdkcce huaweicloudsdkrds huaweicloudsdkims uvicorn fastapi
----------------------------------------------------------------------------------------------------
date.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: date
  namespace: default
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command:
            - "sh"
            - "-c"
            - "date"
          restartPolicy: OnFailure
----------------------------------------------------------------------------------------------------
nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
  namespace: default
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
    ports:
    - containerPort: 80
----------------------------------------------------------------------------------------------------
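The schedule `"* * * * *"` in date.yaml fires every minute. A toy matcher for basic cron fields helps make the five-field format concrete — a sketch supporting only `*`, plain numbers, comma lists, and `*/n` steps (real cron also supports ranges and names):

```python
def cron_field_matches(field, value):
    """Return True if a single cron field matches the given integer value."""
    if field == "*":
        return True
    if field.startswith("*/"):
        # Step values: */15 matches 0, 15, 30, 45
        return value % int(field[2:]) == 0
    return any(int(part) == value for part in field.split(","))

def cron_matches(expr, minute, hour, dom, month, dow):
    # Five fields: minute, hour, day-of-month, month, day-of-week
    fields = expr.split()
    assert len(fields) == 5, "a cron expression has five fields"
    return all(cron_field_matches(f, v)
               for f, v in zip(fields, (minute, hour, dom, month, dow)))

print(cron_matches("* * * * *", 30, 12, 1, 6, 3))    # every minute: always True
print(cron_matches("*/15 * * * *", 20, 0, 1, 1, 0))  # 20 is not a multiple of 15
```

Changing date.yaml's schedule to `"*/15 * * * *"`, for instance, would run the Job only at 0, 15, 30, and 45 minutes past each hour.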
kubeedge-master.sh

#!/bin/bash
master_ip="139.9.74.141"
wget https://216-216.obs.cn-south-1.myhuaweicloud.com/kubernetes_kubeedge.tar.gz
echo -e "139.9.74.141 master\n192.168.216.79 edge-node1\n192.168.216.80 edge-node2" >> /etc/hosts
tar -zxvf kubernetes_kubeedge.tar.gz -C /opt/
# Keep kube-proxy and flannel off the edge nodes
kubectl patch daemonset kube-proxy -n kube-system -p '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "node-role.kubernetes.io/edge", "operator": "DoesNotExist"}]}]}}}}}}}'
kubectl patch daemonset kube-flannel-ds -n kube-system -p '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "node-role.kubernetes.io/edge", "operator": "DoesNotExist"}]}]}}}}}}}'
mv /opt/kubeedge/keadm /usr/bin/keadm
cd /opt/k8simage/
bash load.sh
cd /opt/kubeedge
docker load -i cloudcore.tar
docker load -i installation.tar
docker load -i mosquitto.tar
docker load -i pause.tar
tar -zxvf /opt/kubeedge/kubeedge-1.11.1.tar.gz
# tar -xzvf /opt/kubeedge/kubeedge-v1.11.1-linux-amd64.tar.gz
mkdir /etc/kubeedge/
cp -rvf /opt/kubeedge/kubeedge-v1.11.1-linux-amd64.tar.gz /etc/kubeedge/
cp -rvf /opt/kubeedge/kubeedge-1.11.1/build/crds/ /etc/kubeedge/
cp -rvf /opt/kubeedge/kubeedge-1.11.1/vendor/k8s.io/kubernetes/pkg/kubelet/checkpointmanager/checksum /etc/kubeedge/
cp /opt/kubeedge/kubeedge-1.11.1/build/tools/* /etc/kubeedge/
cd /etc/kubeedge/
pwd
sleep 5
keadm deprecated init --advertise-address=$master_ip --kubeedge-version=1.11.1
# ss -ntpl
netstat -ntpl | grep cloudcore
sleep 5
# Enable cloudStream and the router module in cloudcore's config
sed -i '/cloudStream:/ {N; s/enable: false/enable: true/}' /etc/kubeedge/config/cloudcore.yaml
sed -i '/router:/,+2 s/enable: false/enable: true/' /etc/kubeedge/config/cloudcore.yaml
kill -9 $(ps aux | grep cloudcore | awk 'NR==1{print $2}')
cp /etc/kubeedge/cloudcore.service /etc/systemd/system/cloudcore.service
chmod +x /etc/systemd/system/cloudcore.service
systemctl daemon-reload
systemctl start cloudcore
systemctl enable cloudcore.service
sleep 3
export CLOUDCOREIPS=$master_ip
cd /etc/kubeedge/
./certgen.sh stream
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
sleep 5
keadm gettoken
----------------------------------------------------------------------------------------------------
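The `sed '/cloudStream:/ {N; s/enable: false/enable: true/}'` step flips the `enable` flag on the line right after a key. The same edit can be checked offline in Python — a sketch mimicking that sed idiom against a made-up config snippet:

```python
def enable_after_key(text, key):
    """Set 'enable: false' to 'enable: true' on the line following `key`,
    mimicking sed's '/key/ {N; s/enable: false/enable: true/}'."""
    lines = text.splitlines()
    for i, line in enumerate(lines[:-1]):
        if key in line:
            lines[i + 1] = lines[i + 1].replace("enable: false", "enable: true")
    return "\n".join(lines)

sample = "cloudStream:\n  enable: false\nrouter:\n  enable: false"
print(enable_after_key(sample, "cloudStream:"))
```

Only the `cloudStream` block changes; the `router` block is left untouched, matching what the first sed line does before the second one handles `router:`.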
kubeedge-node.sh

#!/bin/bash
token=""
master_ip="139.9.74.141"
centos=http://192.168.216.10:81/centos7.9/
echo -e "139.9.74.141 master\n192.168.216.79 edge-node1\n192.168.216.80 edge-node2" >> /etc/hosts
tar -zxvf kubernetes_kubeedge.tar.gz -C /opt/
rm -rf /etc/yum.repos.d/*
cat >> /etc/yum.repos.d/local.repo << EOF
[local]
name=local
baseurl=file:///opt/yum
enabled=1
gpgcheck=0
[centos]
name=local
baseurl=$centos
enabled=1
gpgcheck=0
EOF
yum -y install docker-ce
systemctl restart docker
docker --version
mv /opt/kubeedge/keadm /usr/bin/keadm
cd /opt/k8simage/
bash load.sh
cd /opt/kubeedge
docker load -i cloudcore.tar
docker load -i installation.tar
docker load -i mosquitto.tar
docker load -i pause.tar
# Fill in the token printed by `keadm gettoken` on the master before running
keadm join --cloudcore-ipport=$master_ip:10000 --token=$token
sed -i '/edgeStream:/ {N; s/enable: false/enable: true/}' /etc/kubeedge/config/edgecore.yaml
sed -i '/serviceBus:/ {N; s/enable: false/enable: true/}' /etc/kubeedge/config/edgecore.yaml
systemctl restart edgecore
systemctl status edgecore
----------------------------------------------------------------------------------------------------
node.sh

#!/bin/bash
HOST_IP="192.168.216.15"
SQL1="192.168.216.16"
rm -rf /etc/yum.repos.d/*
cat << EOF > /etc/yum.repos.d/centos.repo
[os]
name=Qcloud centos os - \$basearch
baseurl=http://mirrors.cloud.tencent.com/centos/\$releasever/os/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7
[updates]
name=Qcloud centos updates - \$basearch
baseurl=http://mirrors.cloud.tencent.com/centos/\$releasever/updates/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7
[extras]
name=Qcloud centos extras - \$basearch
baseurl=http://mirrors.cloud.tencent.com/centos/\$releasever/extras/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7
[epel]
name=EPEL for redhat/centos \$releasever - \$basearch
baseurl=http://mirrors.cloud.tencent.com/epel/\$releasever/\$basearch/
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/epel/RPM-GPG-KEY-EPEL-7
EOF
tar -zxvf node-v12.16.1-linux-x64.tar.gz
echo "export PATH=/usr/local/node/bin:$PATH" >> /etc/profile
mv node-v12.16.1-linux-x64 /usr/local/node
source /etc/profile
yum install -y gcc-c++ make
yum install -y GraphicsMagick
# Configure npm mirrors
source /etc/profile
npm config set registry http://registry.npm.taobao.org/
npm config set electron_mirror http://npm.taobao.org/mirrors/electron/
# Unpack the Rocket.Chat tarball
tar -zxvf rocket.chat-3.2.2.tgz
# Install server dependencies
cd bundle/programs/server
npm install
# Move the bundle to /opt/Rocket.Chat
mv /root/bundle /opt/Rocket.Chat
# Create the rocketchat user and set ownership
useradd -M rocketchat && usermod -L rocketchat
chown -R rocketchat:rocketchat /opt/Rocket.Chat
# Create the systemd unit file
cat >> /lib/systemd/system/rocketchat.service << EOF
[Unit]
Description=The Rocket.Chat server
After=network.target remote-fs.target nss-lookup.target nginx.service mongod.service
[Service]
ExecStart=/usr/local/node/bin/node /opt/Rocket.Chat/main.js
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=rocketchat
User=rocketchat
Environment=MONGO_URL=mongodb://$SQL1:27017/rocketchat?replicaSet=cloud MONGO_OPLOG_URL=mongodb://$SQL1:27017/local?replicaSet=cloud ROOT_URL=http://$HOST_IP:3000/ PORT=3000
[Install]
WantedBy=multi-user.target
EOF
# Start the Rocket.Chat service and check its status
systemctl restart rocketchat
sleep 15
systemctl status rocketchat
----------------------------------------------------------------------------------------------------
sql-1.1.sh

#!/bin/bash
rm -rf /etc/yum.repos.d/*
cat << EOF > /etc/yum.repos.d/centos.repo
[os]
name=Qcloud centos os - \$basearch
baseurl=http://mirrors.cloud.tencent.com/centos/\$releasever/os/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7
[updates]
name=Qcloud centos updates - \$basearch
baseurl=http://mirrors.cloud.tencent.com/centos/\$releasever/updates/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7
[extras]
name=Qcloud centos extras - \$basearch
baseurl=http://mirrors.cloud.tencent.com/centos/\$releasever/extras/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7
[epel]
name=EPEL for redhat/centos \$releasever - \$basearch
baseurl=http://mirrors.cloud.tencent.com/epel/\$releasever/\$basearch/
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/epel/RPM-GPG-KEY-EPEL-7
EOF
yum -y install vsftpd
echo "anon_root=/opt" >> /etc/vsftpd/vsftpd.conf
systemctl restart vsftpd
tar -xvf yum.tar.gz -C /opt
cat >> /etc/yum.repos.d/ftp.repo << EOF
[local]
name=local
baseurl=file:///opt/yum
enabled=1
gpgcheck=0
EOF
yum install mongodb-org -y
rm -rf /etc/mongod.conf
cat >> /etc/mongod.conf << EOF
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

#security:

#operationProfiling:

replication:
  replSetName: cloud

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:
EOF
systemctl restart mongod.service
systemctl status mongod.service
----------------------------------------------------------------------------------------------------
sql-1.2.sh

#!/bin/bash
SQL1_IP="192.168.216.16"
SQL2_IP="192.168.216.17"
SQL3_IP="192.168.216.18"
mongo --host "$SQL1_IP" --port 27017 --eval "rs.initiate({ _id: 'cloud', members: [ { _id: 0, host: '$SQL1_IP' }, { _id: 1, host: '$SQL2_IP' }, { _id: 2, host: '$SQL3_IP' } ]});"
----------------------------------------------------------------------------------------------------
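sql-1.2.sh initiates the replica set with an inline JavaScript document. The same structure can be generated from the member IPs — a sketch, with the port made explicit (the script relies on mongo's default of 27017):

```python
import json

def build_rs_initiate(set_name, hosts, port=27017):
    """Build the document passed to rs.initiate() for a MongoDB replica set."""
    return {
        "_id": set_name,
        "members": [{"_id": i, "host": f"{h}:{port}"} for i, h in enumerate(hosts)],
    }

cfg = build_rs_initiate("cloud", ["192.168.216.16", "192.168.216.17", "192.168.216.18"])
print(json.dumps(cfg))
```

Generating the document this way keeps the member `_id` values sequential automatically and makes it trivial to add or remove nodes; the `replSetName: cloud` in mongod.conf must match the `_id` here.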
sql-2.sh

#!/bin/bash
SQL1_ip="192.168.216.16"
rm -rf /etc/yum.repos.d/*
cat >> /etc/yum.repos.d/ftp.repo << EOF
[local]
name=local
baseurl=ftp://$SQL1_ip/yum
enabled=1
gpgcheck=0
EOF
yum install mongodb-org -y
rm -rf /etc/mongod.conf
cat >> /etc/mongod.conf << EOF
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

#security:

#operationProfiling:

replication:
  replSetName: cloud

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:
EOF
systemctl restart mongod.service
systemctl status mongod.service
----------------------------------------------------------------------------------------------------
delete-test.txt

#!/bin/bash
# Uninstall the WordPress release
helm uninstall wordpress
# Remove PV resources
kubectl delete pvc data-wordpress-mariadb-0
kubectl delete -f wordpress-pv.yaml
kubectl delete -f mariadb-pv.yaml
# Remove the ChartMuseum Service and Deployment
kubectl delete -f chartmuseum-service.yaml
kubectl delete -f chartmuseum-deployment.yaml
rm -rf /mnt/data/wordpress
rm -rf /mnt/data/mariadb
# Remove the namespace
kubectl delete namespace chartmuseum
rm -rf chartmuseum-deployment.yaml chartmuseum-service.yaml linux-amd64 mariadb-pv.yaml wordpress wordpress-pv.yaml
----------------------------------------------------------------------------------------------------
wordpress.sh

#!/bin/bash
# Install Helm
tar -zxvf helm-v3.3.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin
# Create the ChartMuseum namespace
kubectl create namespace chartmuseum
# Write the ChartMuseum manifests
cat <<EOF > chartmuseum-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chartmuseum
  namespace: chartmuseum
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chartmuseum
  template:
    metadata:
      labels:
        app: chartmuseum
    spec:
      containers:
      - name: chartmuseum
        image: registry.cn-hangzhou.aliyuncs.com/starsleap/chartmuseum:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: chart-storage
          mountPath: /charts
        env:
        - name: STORAGE
          value: local
        - name: STORAGE_LOCAL_ROOTDIR
          value: /charts
      volumes:
      - name: chart-storage
        hostPath:
          path: /data/charts
          type: DirectoryOrCreate
EOF
cat <<EOF > chartmuseum-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: chartmuseum
  namespace: chartmuseum
spec:
  type: ClusterIP
  selector:
    app: chartmuseum
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
EOF
# Apply the ChartMuseum manifests
kubectl apply -f chartmuseum-deployment.yaml
kubectl apply -f chartmuseum-service.yaml
# Unpack the WordPress chart
tar -zxvf wordpress-13.0.23.tgz
# Change the WordPress service type in values.yaml from LoadBalancer to NodePort
sed -i 's/type: LoadBalancer/type: NodePort/' wordpress/values.yaml
# Create the WordPress and MariaDB PV directories and set permissions
mkdir -p /mnt/data/wordpress
chmod 777 /mnt/data/wordpress
mkdir -p /mnt/data/mariadb
chmod 777 /mnt/data/mariadb
# Write the WordPress and MariaDB PV manifests
cat <<EOF > wordpress-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  hostPath:
    path: "/mnt/data/wordpress"
EOF
cat <<EOF > mariadb-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mariadb-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  hostPath:
    path: "/mnt/data/mariadb"
EOF
# Apply the PV manifests
kubectl apply -f wordpress-pv.yaml
kubectl apply -f mariadb-pv.yaml
# Install WordPress
helm install wordpress ./wordpress
# Get the NodePort and node IP of the WordPress service
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services wordpress)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo "WordPress URL: http://$NODE_IP:$NODE_PORT/"
sleep 55
kubectl port-forward --namespace default service/wordpress $(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services wordpress):80 &
curl -L 127.0.0.1:$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services wordpress)/wp-admin
----------------------------------------------------------------------------------------------------
1.txt

helm install mariadb-test mariadb-7.3.14.tgz \
  --set mariadb.rootPassword="chinaskill" \
  --set service.type=ClusterIP \
  --set replication.enabled=false
----------------------------------------------------------------------------------------------------
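The NODE_PORT lookup in wordpress.sh uses the jsonpath `{.spec.ports[0].nodePort}`. The equivalent walk over a service object in Python, for readers less familiar with jsonpath (the sample dict is an abbreviated, made-up Service):

```python
def get_path(obj, path):
    """Follow a list of keys/indices through nested dicts and lists,
    like the jsonpath {.spec.ports[0].nodePort}."""
    for step in path:
        obj = obj[step]
    return obj

# Abbreviated, illustrative Service object
service = {"spec": {"type": "NodePort",
                    "ports": [{"port": 80, "nodePort": 30080}]}}
print(get_path(service, ["spec", "ports", 0, "nodePort"]))
```

Each `.field` in the jsonpath becomes a dict key and each `[n]` a list index, so the shell one-liner and this loop extract the same value.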
Using kubectl to operate the cluster

You first need to download kubectl and its configuration file, copy them to your client machine, and complete the configuration; then you can access the Kubernetes cluster.

● Download kubectl: from the Kubernetes release page, download a kubectl matching (or newer than) the cluster version.
● Download the kubectl configuration file: click "here" on the console page to download it (re-download it if the public apiserver address changes).
● Install and configure kubectl (the steps below use Linux as an example; for details, see "Installing and configuring kubectl"):
1. Copy kubectl and its configuration file to the /home directory of your client machine.
2. Log in to the client machine and configure kubectl:
   1. cd /home
   2. chmod +x kubectl
   3. mv -f kubectl /usr/local/bin
   4. mkdir -p $HOME/.kube
   5. mv -f kubeconfig.json $HOME/.kube/config

Figure 8: Deploying the kubectl tool

Click the "here" link shown in Figure 8 to download the kubectl configuration file kubeconfig.json. Using an SSH tool such as CRT, connect to the CCE cluster's control node through the EIP bound earlier, upload the downloaded kubeconfig.json to that node, and run `mkdir -p $HOME/.kube` followed by `mv -f kubeconfig.json $HOME/.kube/config` to complete the kubectl deployment. Once kubectl is deployed, run `kubectl cluster-info` to view the cluster information:

[root@k8s-chinaskill-controller ~]# mkdir -p $HOME/.kube
[root@k8s-chinaskill-controller ~]# mv -f kubeconfig.json $HOME/.kube/config
[root@k8s-chinaskill-controller ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.178:5443
CoreDNS is running at https://192.168.0.178:5443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
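Before moving kubeconfig.json into $HOME/.kube/config, a quick structural sanity check can catch a truncated or mis-downloaded file. A sketch only — the sample document is a minimal made-up kubeconfig, and a real check would also validate the server address and credentials:

```python
import json

def looks_like_kubeconfig(doc):
    """Minimal structural check: a kubeconfig has clusters, users, and
    contexts, and at least one cluster entry."""
    return (all(k in doc for k in ("clusters", "users", "contexts"))
            and isinstance(doc.get("clusters"), list)
            and len(doc["clusters"]) > 0)

# Minimal illustrative kubeconfig document
sample = json.loads('{"clusters": [{"name": "cce"}], "users": [], "contexts": []}')
print(looks_like_kubeconfig(sample))
```

Running such a check right after download, before overwriting an existing $HOME/.kube/config, avoids locking yourself out of a working cluster with a bad file.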
yd_290404665
Posted on 2024-12-13 21:20:32