-
Newbie question: Table 3 of the Image Management Service product documentation below lists the Elastic Cloud Server types and the operating system versions they support (Ubuntu, Debian, and so on). Why does the ECS purchase page (see figure below) offer far fewer public images than that? Does that mean some supported operating systems simply have no corresponding public image?
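For what it's worth, one way to compare the table against what a region actually offers is to query the IMS API for the region's public ("gold") images; a minimal sketch, where the region endpoint and token handling are assumptions to adapt to your account:

# List public Ubuntu images in one region via the IMS v2 API
# (cn-north-4 and ${TOKEN} are placeholders)
curl -s "https://ims.cn-north-4.myhuaweicloud.com/v2/cloudimages?__imagetype=gold&__platform=Ubuntu" \
  -H "X-Auth-Token: ${TOKEN}" | python3 -m json.tool

The purchase page only shows images published for the selected region and flavor, so the API result per region is the more direct comparison.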
-
A machine in the Guiyang region, instance ID c768c7a7-9633-47d0-adcf-4ed17a252381, name notebook-c51a, reports:

ERROR 08-25 09:20:47 [core.py:586]   File "/vllm-workspace/LMCache-Ascend/lmcache_ascend/integration/vllm/vllm_v1_adapter.py", line 155, in init_lmcache_engine
ERROR 08-25 09:20:47 [core.py:586]     engine = LMCacheEngineBuilder.get_or_create(
ERROR 08-25 09:20:47 [core.py:586]   File "/vllm-workspace/LMCache/lmcache/v1/cache_engine.py", line 947, in get_or_create
ERROR 08-25 09:20:47 [core.py:586]     memory_allocator = cls._Create_memory_allocator(config, metadata)
ERROR 08-25 09:20:47 [core.py:586]   File "/vllm-workspace/LMCache-Ascend/lmcache_ascend/v1/cache_engine.py", line 21, in _ascend_create_memory_allocator
ERROR 08-25 09:20:47 [core.py:586]     return AscendMixedMemoryAllocator(int(max_local_cpu_size * 1024**3))
ERROR 08-25 09:20:47 [core.py:586]   File "/vllm-workspace/LMCache-Ascend/lmcache_ascend/v1/memory_management.py", line 69, in __init__
ERROR 08-25 09:20:47 [core.py:586]     lmc_ops.host_register(self.buffer)
ERROR 08-25 09:20:47 [core.py:586] RuntimeError: Unable to pin host memory with error code: -1
ERROR 08-25 09:20:47 [core.py:586] Exception raised from halRegisterHostPtr at /vllm-workspace/LMCache-Ascend/csrc/managed_mem.cpp:109 (most recent call first):
ERROR 08-25 09:20:47 [core.py:586] frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0xb8 (0xfffc2cf2c908 in /usr/local/python3.10.17/lib/python3.10/site-packages/torch/lib/libc10.so)
ERROR 08-25 09:20:47 [core.py:586] frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x6c (0xfffc2cedb404 in /usr/local/python3.10.17/lib/python3.10/site-packages/torch/lib/libc10.so)
ERROR 08-25 09:20:47 [core.py:586] frame #2: <unknown function> + 0x1abf8 (0xfff9c407abf8 in /vllm-workspace/LMCache-Ascend/lmcache_ascend/c_ops.cpython-310-aarch64-linux-gnu.so)

Running LMCache-Ascend hits the error above. The root cause is a limit on how much host memory can be pinned: the system's memory-lock limit (RLIMIT_MEMLOCK) is set too low, and in the container environment there is no permission to run `ulimit -l unlimited` to raise it. The service configuration cannot be adjusted either to lift the memory-lock limit.

Reference material — lifting the memory-lock limit via the containerd service configuration:
1. Locate the containerd service unit file, usually at /usr/lib/systemd/system/containerd.service (the path varies by distribution; check it with `systemctl status containerd`).
2. In the [Service] section of that file, add `LimitMEMLOCK=infinity`, which sets the memory-lock limit to unlimited.
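A minimal sketch of that change, assuming systemd and root access on the host node (a drop-in override keeps the vendor unit file untouched; the drop-in filename is our choice):

# /etc/systemd/system/containerd.service.d/memlock.conf
[Service]
LimitMEMLOCK=infinity

# Reload systemd and restart containerd so new containers inherit the limit
sudo systemctl daemon-reload
sudo systemctl restart containerd

Containers created after the restart should report `unlimited` from `ulimit -l`; containers that were already running need to be recreated to pick up the new limit.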
-
RuntimeError: Unable to pin host memory with error code: -1 · Issue #5 · LMCache/LMCache-Ascend
When running LMCache-Ascend, the error above appears. The usual fixes are either setting LimitMEMLOCK when deploying the instance, or raising the cap with `ulimit -l`. But the notebook has no root privileges, so the latter is impossible; and since `docker run` and docker-compose cannot be used there, the former is impossible as well. How can this memory-lock limit be worked around?
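As a diagnostic sketch (not a fix), the effective lock limit inside the notebook can be confirmed with standard commands before trying anything else:

# Show the memlock cap for the current shell and process
ulimit -l
grep "Max locked memory" /proc/self/limits

If both report a small fixed value (often 64 kB on default configurations), the pin failure above is expected, and only the platform side can raise the limit.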
-
Below is the Dockerfile:

#
# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
FROM quay.io/ascend/cann:8.2.rc1-910b-openeuler22.03-py3.11

# Set the user ma-user whose UID is 1000 and the user group ma-group whose GID is 100
USER root
RUN default_user=$(getent passwd 1000 | awk -F ':' '{print $1}') || echo "uid: 1000 does not exist" && \
    default_group=$(getent group 100 | awk -F ':' '{print $1}') || echo "gid: 100 does not exist" && \
    if [ ! -z ${default_user} ] && [ ${default_user} != "ma-user" ]; then \
        userdel -r ${default_user}; \
    fi && \
    if [ ! -z ${default_group} ] && [ ${default_group} != "ma-group" ]; then \
        groupdel -f ${default_group}; \
    fi && \
    groupadd -g 100 ma-group && useradd -d /home/ma-user -m -u 1000 -g 100 -s /bin/bash ma-user && \
    chmod -R 750 /home/ma-user

ARG PIP_INDEX_URL="https://mirrors.aliyun.com/pypi/simple"
ARG COMPILE_CUSTOM_KERNELS=1
ENV COMPILE_CUSTOM_KERNELS=${COMPILE_CUSTOM_KERNELS}

RUN yum update -y && \
    yum install -y python3-pip git vim wget net-tools gcc gcc-c++ make cmake numactl-devel && \
    rm -rf /var/cache/yum

RUN pip config set global.index-url ${PIP_INDEX_URL}
# Set pip source to a faster mirror
RUN pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

WORKDIR /workspace
COPY . /workspace/LMCache-Ascend/

# Install vLLM
ARG VLLM_REPO=https://githubfast.com/vllm-project/vllm.git
ARG VLLM_TAG=v0.9.2
RUN git clone --depth 1 $VLLM_REPO --branch $VLLM_TAG /workspace/vllm
# On x86, triton is installed by vllm. But on Ascend, triton doesn't work correctly, so we need to uninstall it.
RUN VLLM_TARGET_DEVICE="empty" python3 -m pip install -e /workspace/vllm/ --extra-index https://download.pytorch.org/whl/cpu/ --retries 5 --timeout 30 && \
    python3 -m pip uninstall -y triton

# Install vLLM-Ascend
ARG VLLM_ASCEND_REPO=https://githubfast.com/vllm-project/vllm-ascend.git
ARG VLLM_ASCEND_TAG=v0.9.2rc1
RUN git clone --depth 1 $VLLM_ASCEND_REPO --branch $VLLM_ASCEND_TAG /workspace/vllm-ascend
RUN cd /workspace/vllm-ascend && \
    git apply -p1 /workspace/LMCache-Ascend/docker/kv-connector-v1.diff
RUN export PIP_EXTRA_INDEX_URL=https://mirrors.huaweicloud.com/ascend/repos/pypi && \
    source /usr/local/Ascend/ascend-toolkit/set_env.sh && \
    source /usr/local/Ascend/nnal/atb/set_env.sh && \
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/Ascend/ascend-toolkit/latest/`uname -i`-linux/devlib && \
    python3 -m pip install -v -e /workspace/vllm-ascend/ --extra-index https://download.pytorch.org/whl/cpu/

# Install modelscope (for fast download) and ray (for multinode)
RUN python3 -m pip install modelscope ray

# Install LMCache
ARG LMCACHE_REPO=https://githubfast.com/LMCache/LMCache.git
ARG LMCACHE_TAG=v0.3.3
RUN git clone --depth 1 $LMCACHE_REPO --branch $LMCACHE_TAG /workspace/LMCache
# Our build is based on arm64
RUN sed -i "s/^infinistore$/infinistore; platform_machine == 'x86_64'/" /workspace/LMCache/requirements/common.txt
# Install LMCache with retries and timeout
RUN export NO_CUDA_EXT=1 && python3 -m pip install -v -e /workspace/LMCache --retries 5 --timeout 30

# Install LMCache-Ascend
RUN cd /workspace/LMCache-Ascend && \
    source /usr/local/Ascend/ascend-toolkit/set_env.sh && \
    source /usr/local/Ascend/nnal/atb/set_env.sh && \
    export SOC_VERSION=ASCEND910B3 && \
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/Ascend/ascend-toolkit/latest/`uname -i`-linux/devlib && \
    python3 -m pip install -v --no-build-isolation -e . && \
    python3 -m pip cache purge

# Switch to user ma-user
USER ma-user
CMD ["/bin/bash"]

The registered image options (image management console) and the parameters used to create the notebook are shown below, but the notebook creation ultimately failed.
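For contrast, where a plain Docker host is available (not the notebook scenario above), the memlock cap can be raised per container at start time; a sketch with a placeholder image name:

# -1:-1 sets both the soft and hard RLIMIT_MEMLOCK to unlimited for this container,
# which is what the pinned-memory allocator needs
docker run --ulimit memlock=-1:-1 --name lmcache-test my-lmcache-ascend:latest

This is equivalent in effect to the LimitMEMLOCK=infinity service-level setting, but scoped to one container.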
-
Workspace released an official guide in early August for creating private Windows 11 desktop images: https://support.huaweicloud.com/intl/en-us/usermanual-workspace/workspace_06_1004.html
I followed the guide and recorded a video demonstrating each key step of the process and the points to watch out for.

Process video:
1. Upload the ISO file
2. Configure the ECS & create the image
3. Update certificates

Notes:
Windows 11 currently has several problematic patches, and some updates may cause virtual machines to behave abnormally. Recent problematic cumulative updates:
- August 12, 2025 — KB5063878: caused SSD/HDD drive failures.
- June 11, 2025 — KB5063060: blue screen after the patch is applied.
- June 10, 2025 — KB5060842: blue screen after the patch is applied.
It is recommended to restrict Windows Update, disable automatic updates, and verify patches before performing large-scale upgrades.

Attachments:
The attachments contain all the files used in the video, the reference commands, and the finished Workspace images. The image OS versions are Windows 11 Professional and Windows 11 Enterprise (neither is licensed). (To be added.) For how to upload the images to your Huawei Cloud account and create Workspace desktops, see the videos below.
Win11 Pro desktop image: (to be added)
Win11 Enterprise desktop image: (to be added)

Disclaimer:
The images included in the attachments are for trial, testing, and demonstration purposes only. They must not be used in production environments or for commercial gain.
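On the note above about disabling automatic updates: a minimal sketch of one common way to do it via the Windows Update policy registry keys (run in an elevated prompt; verify against your Windows build, as policy behavior varies by edition):

:: Tell the Windows Update agent not to auto-install updates
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoUpdate /t REG_DWORD /d 1 /f

The same policy can be set interactively via gpedit.msc → Windows Update → Configure Automatic Updates → Disabled.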
-
The Roxaew ransomware has tampered with important files:
1. Important files tampered with
2. SQL Server databases locked
3. SQL Server backup files modified
Please help — thanks, everyone.
-
This directory contains only RPM package files, no ISO image. Is there a guide for building an image, or how can these packages be packed into an ISO that can install the system?
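As a rough sketch of the first half of that job — turning a directory of RPMs into a repository and wrapping it in an ISO — assuming createrepo_c and genisoimage are installed (note this is not a bootable installer; that additionally needs the distribution's boot images):

# Build repo metadata over the RPM directory, then pack it into an ISO
createrepo_c /path/to/rpms/
genisoimage -o custom-repo.iso -R -J -joliet-long /path/to/rpms/

The resulting ISO can be mounted and used as a local yum/dnf repository; a full installable system ISO also requires the installer tree (isolinux/EFI images, kickstart, etc.).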
-
How do I enable VT on a Huawei Cloud server? The connection drops as soon as it reboots, so I can't get into the BIOS.
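For context, cloud servers generally don't expose a BIOS/firmware setup screen to tenants, so this is usually checked from inside the guest instead; standard Linux commands:

# Count the vmx (Intel) / svm (AMD) CPU flags visible to the guest
grep -cE 'vmx|svm' /proc/cpuinfo
lscpu | grep -i virtualization

A zero count means the instance type does not pass virtualization extensions through to the guest, and no setting on the guest side will change that — it depends on the flavor/platform.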
-
1 Overview
1.1 Case introduction
With cloud-native development and remote collaboration becoming mainstream, quickly building development environments, keeping them consistent, and sharing them across teams has become a key efficiency challenge. C/C++ development depends heavily on the system toolchain (compilers, debuggers, third-party libraries), so environment setup is complex and error-prone, and the traditional manual approach cannot keep up with agile development. In cross-region or open-source collaboration, environment drift often causes "works locally, fails in the cloud" problems that seriously hurt delivery. By baking the C/C++ development environment into a cloud host image, developers can obtain a standardized environment in one click, cut repeated setup costs, and give team collaboration and CI/CD pipelines a reliable, reproducible foundation.
This case uses the Huawei Cloud Developer Space (cloud host) together with VS Code to build a custom cloud host image containing the full C/C++ toolchain (GCC/Clang, CMake, the Conan package manager), debugging tools (GDB/LLDB), and common dependency libraries. The image can be deployed directly as a cloud host instance, giving an out-of-the-box programming environment with no manual installation. The cloud host's elastic resources (developer professional membership, purchased on demand) support high-performance compilation and testing, while versioned image management keeps environment updates traceable and rollback-able, integrating seamlessly with the platform's compute and storage services.
1.2 Intended audience
Individual developers; university students.
1.3 Estimated time
About 60 minutes in total.
1.4 Workflow
Log in to the cloud host and install the compiler toolchain, debugging tools, and VS Code from the terminal; write the sample code in VS Code and run it to verify the result; build an image under Developer Space – Workbench – My Images, reset the cloud host, configure it with the custom image, and log in to verify the program.
1.5 Resource overview
Estimated total cost: 0 CNY.
Resource | Specification | Price (CNY) | Duration (min)
Developer Space (cloud host) | 4 vCPUs, 8 GB RAM, ARM, Ubuntu 24.04 Server custom edition | Free | 60
VS Code | 1.97.2 | Free | 60

2 Setting up the lab environment
2.1 Configure the cloud host
Log in to Developer Space and follow section 2.2 of the "Play with a cloud host in 10 minutes" case to request and configure a Huawei Developer Space cloud host with the parameters above.
2.2 Install the compiler toolchain
The C/C++ runtime environment needs a compiler (GCC/G++) and the GDB debugger, so first confirm whether they are installed on the cloud host. Right-click the cloud host desktop, click Open Terminal Here to open a Terminal Emulator window, and verify with:
gcc --version
g++ --version
gdb --version
If gdb is not installed, deploy it with:
sudo apt update                              # refresh the package lists
sudo apt install build-essential gdb -y

3 Install and set up VS Code
3.1 Install VS Code
Download the .deb. In the cloud host's Terminal Emulator window, download Visual Studio Code from the command line:
sudo wget -O code.deb https://vscode.download.prss.microsoft.com/dbazure/download/stable/e54c774e0add60467559eb0d1e229c6452cf8447/code_1.97.2-1739406006_arm64.deb
Note: wget saves the package to the current directory and names it code.deb. If the cloud host is x86, download instead:
sudo wget -O code.deb https://vscode.download.prss.microsoft.com/dbazure/download/stable/e54c774e0add60467559eb0d1e229c6452cf8447/code_1.97.2-1739406807_amd64.deb
Install VS Code. After the download completes, install the .deb package with dpkg:
sudo dpkg -i code.deb
Note: if dependency problems occur during installation, repair them with:
sudo apt-get install -f
Run VS Code. After installation, launch it from All Applications – Development – Visual Studio Code in the lower-left of the cloud desktop, or from the command line:
code
3.2 Install VS Code extensions
VS Code, a powerful cross-platform editor, offers developers a rich extension ecosystem. Open the VS Code extension marketplace, click the extensions icon, type an extension name, and click Install. The extensions used here:
C/C++: syntax highlighting, IntelliSense, code navigation, and debugging support (cross-platform builds and multi-environment configuration); the foundation for any C/C++ project, usable without extra configuration.
C/C++ Extension Pack: core editing, debugging, building, and formatting features covering the whole flow from writing code to compiling and debugging; well suited to cross-platform or CMake-based medium and large projects.
CMake Tools: integrates the CMake build system with automated builds, debugging, and project configuration, simplifying cross-platform project management for complex dependency graphs.
Code Runner: runs a code snippet or file in one click for quick testing and debugging, with results streamed to the terminal and no manual compile commands.
GitLens: enhanced Git features — history, diffs, and authorship — for quickly locating changes and commits during collaborative development.
C/C++ Snippets: quickly inserts common code templates (loops, conditionals) to reduce repetitive typing.
3.3 Configure the development environment
Open File Manager in the lower-left of the cloud host. In its top-left corner, use File – New Folder to create the project directory. In VS Code, click File – Open Folder and select the directory just created, cpp_project.
In VS Code, create a main.cpp file with the sample code:

#include <iostream>
#include <cstdio>   // for printf
using namespace std;

int main() {
    printf("Welcome to Huawei Cloud Developer Space!");
    return 0;
}

Configure the build task. In VS Code, press Ctrl+Shift+P, type Tasks: Configure Task, and choose Create tasks.json from template → Others. The project directory now contains a tasks.json; open it and change it to:

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "type": "shell",
            "command": "g++",
            "args": ["-g", "-o", "main", "main.cpp"],
            "group": {
                "kind": "build",
                "isDefault": true
            }
        }
    ]
}

Configure launch.json. Click the debug icon on the left (or press Ctrl+Shift+D) and choose Create a launch.json file → C++ (GDB/LLDB). Change launch.json to:

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "C++ Debug",
            "type": "cppdbg",
            "request": "launch",
            "program": "${workspaceFolder}/main",
            "args": [],
            "stopAtEntry": false,
            "cwd": "${workspaceFolder}",
            "environment": [],
            "externalConsole": false,
            "MIMode": "gdb",
            "preLaunchTask": "build"
        }
    ]
}

After saving, the left toolbar changes accordingly.
Build and debug. Start a build-and-debug run from the C++ Debug button in the upper-left, or alternatively:
Build: press Ctrl+Shift+B, or run the terminal command: g++ -g -o main main.cpp
Debug: press F5 to start debugging, with breakpoints, variable watches, and so on.
3.4 Troubleshooting
Problem 1: compiler not found. Make sure build-essential is installed, and check the PATH environment variable: echo $PATH
Problem 2: debugger fails to start. Make sure gdb is installed: sudo apt install gdb -y
Problem 3: VS Code extension installation fails. Try a different network, or download the .vsix file and install it manually.

4 Building and using the image
4.1 Create the image
Before creating the image, first confirm the cloud host is shut down. In Developer Space, under Workbench – My Cloud Space, click My Images to enter the image editing page. Click Create Image, set the image name and description, and choose the image source. Click OK; Developer Space starts building the image automatically. Building takes about 30 minutes — please be patient. When it finishes, the image status changes to "Ready".
4.2 Use the image
Before loading the image, confirm the cloud host is in a configurable state. Click Configure Cloud Host, choose Private Image, pick the previously built image from the drop-down, and click Install. Enter the desktop; the custom image starts loading. After it loads (about 3–5 minutes), enter the cloud desktop, launch VS Code from the Terminal Emulator with the code command, and run the program — it runs successfully.
This completes the case of building a custom C/C++ development environment cloud host image with Developer Space. For more cases, see the Case Center.
-
1. The controller node hostname is controller; set the compute node hostname to compute:
[root@controller ~]# hostnamectl set-hostname controller && su
[root@compute ~]# hostnamectl set-hostname compute && su
2. Map the IP addresses to hostnames in the hosts file:
[root@controller&&compute ~]# vi /etc/hosts
192.168.100.10 controller
192.168.100.20 compute
[root@controller&&compute ~]# vi /etc/selinux/config    # change to SELINUX=disabled
[root@controller&&compute ~]# setenforce 0
[root@controller&&compute ~]# systemctl stop firewalld && systemctl disable firewalld
3. Configure the yum repositories:
[root@controller ~]# rm -rf /etc/yum.repos.d/*
[root@controller&&compute ~]# vi /etc/yum.repos.d/http.repo
[centos]
name=centos
baseurl=http://192.168.133.130/centos
gpgcheck=0
enabled=1
[openstack]
name=openstack
baseurl=http://192.168.133.130/openstack/iaas-repo
gpgcheck=0
enabled=1
[root@controller&&compute ~]# yum clean all && yum repolist && yum makecache

2.1.1 Deploy the container cloud platform
Use the OpenStack private cloud platform to create two cloud hosts as the Kubernetes cluster's master and node, deploy the Kubernetes cluster, then deploy the Istio service mesh, KubeVirt virtualization, and the Harbor image registry.
Create the two cloud hosts and configure their networking.
# Deploy the Kubernetes cluster
[root@localhost ~]# mount -o loop chinaskills_cloud_paas_v2.0.2.iso /mnt/
[root@localhost ~]# cp -rfv /mnt/* /opt/
[root@localhost ~]# umount /mnt/
[root@master ~]# hostnamectl set-hostname master && su
[root@worker ~]# hostnamectl set-hostname worker && su
# Install kubeeasy
[root@master ~]# mv /opt/kubeeasy /usr/bin/kubeeasy
# Install dependencies
[root@master ~]# kubeeasy install depend \
--host 192.168.59.200,192.168.59.201 \
--user root \
--password 000000 \
--offline-file /opt/dependencies/base-rpms.tar.gz
# Install k8s
[root@master ~]# kubeeasy install k8s \
--master 192.168.59.200 \
--worker 192.168.59.201 \
--user root \
--password 000000 \
--offline-file /opt/kubernetes.tar.gz
# Install the Istio mesh
[root@master ~]# kubeeasy add --istio istio
# Install KubeVirt virtualization
[root@master ~]# kubeeasy add --virt kubevirt
# Install the Harbor registry
[root@master ~]# kubeeasy add --registry harbor
[root@k8s-master-node1 ~]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: exam
spec:
  containers:
  - name: exam
    image: nginx:latest
    imagePullPolicy: IfNotPresent
    env:
    - name: exam
      value: "2022"
[root@k8s-master-node1 ~]# kubectl apply -f pod.yaml
[root@k8s-master-node1 ~]# kubectl get pod
# Deploy the Istio service mesh
[root@k8s-master-node1 ~]# kubectl create ns exam
namespace/exam created
[root@k8s-master-node1 ~]# kubectl edit ns exam    # add:
  labels:
    istio-injection: enabled
[root@k8s-master-node1 ~]# kubectl describe ns exam    # verify

Task 2: Container cloud service operations (15 points)
2.2.1 Containerize Node-Exporter
Write a Dockerfile to build the exporter image: install and configure Node-Exporter on a centos base and set the service to start on boot.
Upload the Hyperf.tar package:
[root@k8s-master-node1 ~]# tar -zxvf Hyperf.tar.gz
[root@k8s-master-node1 ~]# cd hyperf/
[root@k8s-master-node1 hyperf]# docker load -i centos_7.9.2009.tar
Upload the node_exporter-1.7.0.linux-amd64.tar package:
[root@k8s-master-node1 hyperf]# vim Dockerfile-exporter
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
ADD node_exporter-1.7.0.linux-amd64.tar.gz /root/
EXPOSE 9100
ENTRYPOINT ["./root/node_exporter-1.7.0.linux-amd64/node_exporter"]
[root@k8s-master-node1 hyperf]# docker build -t monitor-exporter:v1.0 -f Dockerfile-exporter .
2.2.2 Containerize Alertmanager
Write a Dockerfile to build the alert image: install and configure Alertmanager on a centos:latest base and set the service to start on boot.
Upload the alertmanager-0.26.0.linux-amd64.tar package:
[root@k8s-master-node1 hyperf]# vim Dockerfile-alert
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
ADD alertmanager-0.26.0.linux-amd64.tar.gz /root/
EXPOSE 9093 9094
ENTRYPOINT ["./root/alertmanager-0.26.0.linux-amd64/alertmanager","--config.file","/root/alertmanager-0.26.0.linux-amd64/alertmanager.yml"]
[root@k8s-master-node1 hyperf]# docker build -t monitor-alert:v1.0 -f Dockerfile-alert .
2.2.3 Containerize Grafana
Write a Dockerfile to build the grafana image: install and configure Grafana on a centos base and set the service to start on boot.
Upload the grafana-6.4.1.linux-amd64.tar.gz package:
[root@k8s-master-node1 hyperf]# vim Dockerfile-grafana
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
ADD grafana-6.4.1.linux-amd64.tar.gz /root/
EXPOSE 3000
ENTRYPOINT ["./root/grafana-6.4.1/bin/grafana-server","-homepath","/root/grafana-6.4.1/"]
[root@k8s-master-node1 hyperf]# docker build -t monitor-grafana:v1.0 -f Dockerfile-grafana .
[root@k8s-master-node1 hyperf]# docker run -d --name grafana-exam-jiance monitor-grafana:v1.0 && sleep 5 && docker exec grafana-exam-jiance ps -aux && docker rm -f grafana-exam-jiance
2.2.4 Containerize Prometheus
Write a Dockerfile to build the prometheus image: install and configure Prometheus on a centos base and set the service to start on boot.
Upload and unpack prometheus-2.13.0.linux-amd64.tar.gz:
[root@k8s-master-node1 hyperf]# tar -zxvf prometheus-2.13.0.linux-amd64.tar.gz
[root@k8s-master-node1 hyperf]# mv prometheus-2.13.0.linux-amd64/prometheus.yml /root/hyperf && rm -rf prometheus-2.13.0.linux-amd64
[root@k8s-master-node1 hyperf]# vim Dockerfile-prometheus
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
ADD prometheus-2.13.0.linux-amd64.tar.gz /root/
RUN mkdir -p /data/prometheus/
COPY prometheus.yml /data/prometheus/
EXPOSE 9090
ENTRYPOINT ["./root/prometheus-2.13.0.linux-amd64/prometheus","--config.file","/data/prometheus/prometheus.yml"]
[root@k8s-master-node1 hyperf]# docker build -t monitor-prometheus:v1.0 -f Dockerfile-prometheus .
[root@k8s-master-node1 hyperf]# vim prometheus.yml    # edit the scrape jobs:
- job_name: 'prometheus'
  static_configs:
  - targets: ['localhost:9090']
- job_name: 'node'
  static_configs:
  - targets: ['node:9100']
- job_name: 'alertmanager'
  static_configs:
  - targets: ['alertmanager:9093']
- job_name: 'node-exporter'
  static_configs:
  - targets: ['node:9100']
2.2.5 Orchestrate Prometheus
Write a docker-compose.yaml that deploys the monitoring system from the exporter, alert, grafana, and prometheus images.
[root@k8s-master-node1 hyperf]# vim docker-compose.yaml
version: '3'
services:
  node:
    container_name: monitor-node
    image: monitor-exporter:v1.0
    restart: always
    hostname: node
    ports:
    - 9100:9100
  alertmanager:
    container_name: monitor-alertmanager
    image: monitor-alert:v1.0
    depends_on:
    - node
    restart: always
    hostname: alertmanager
    links:
    - node
    ports:
    - 9093:9093
    - 9094:9094
  grafana:
    container_name: monitor-grafana
    image: monitor-grafana:v1.0
    restart: always
    depends_on:
    - node
    - alertmanager
    hostname: grafana
    links:
    - node
    - alertmanager
    ports:
    - 3000:3000
  prometheus:
    container_name: monitor-prometheus
    image: monitor-prometheus:v1.0
    restart: always
    depends_on:
    - node
    - alertmanager
    - grafana
    hostname: prometheus
    links:
    - node
    - alertmanager
    - grafana
    ports:
    - 9090:9090
[root@k8s-master-node1 ~]# docker-compose up -d
2.2.6 Install Jenkins
Deploy Jenkins to the default namespace. Complete the offline plugin installation and set the Jenkins login information and authorization policy.
Upload the BlueOcean.tar.gz package:
[root@k8s-master-node1 ~]# tar -zxvf BlueOcean.tar.gz
[root@k8s-master-node1 ~]# cd BlueOcean/images/
[root@k8s-master-node1 images]# docker load -i java_8-jre.tar
[root@k8s-master-node1 images]# docker load -i jenkins_jenkins_latest.tar
[root@k8s-master-node1 images]# docker load -i gitlab_gitlab-ce_latest.tar
[root@k8s-master-node1 images]# docker load -i maven_latest.tar
[root@k8s-master-node1 images]# docker tag maven:latest 192.168.59.200/library/maven
[root@k8s-master-node1 images]# docker login 192.168.59.200
Username: admin
Password: (Harbor12345)
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
[root@k8s-master-node1 images]# docker push 192.168.59.200/library/maven
# Install Jenkins
[root@k8s-master-node1 BlueOcean]# kubectl create ns devops
[root@k8s-master-node1 BlueOcean]# kubectl create deployment jenkins -n devops --image=jenkins/jenkins:latest --port 8080 --dry-run -o yaml > jenkins.yaml
[root@k8s-master-node1 BlueOcean]# vim jenkins.yaml    # edit to the following
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: jenkins
  name: jenkins
  namespace: devops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: jenkins
    spec:
      nodeName: k8s-master-node1
      containers:
      - image: jenkins/jenkins:latest
        imagePullPolicy: IfNotPresent
        name: jenkins
        ports:
        - containerPort: 8080
          name: jenkins8080
        securityContext:
          runAsUser: 0
          privileged: true
        volumeMounts:
        - name: jenkins-home
          mountPath: /home/jenkins_home/
        - name: docker-home
          mountPath: /run/docker.sock
        - name: docker
          mountPath: /usr/bin/docker
        - name: kubectl
          mountPath: /usr/bin/kubectl
        - name: kube
          mountPath: /root/.kube
      volumes:
      - name: jenkins-home
        hostPath:
          path: /home/jenkins_home/
      - name: docker-home
        hostPath:
          path: /run/docker.sock
      - name: docker
        hostPath:
          path: /usr/bin/docker
      - name: kubectl
        hostPath:
          path: /usr/bin/kubectl
      - name: kube
        hostPath:
          path: /root/.kube
[root@k8s-master-node1 BlueOcean]# kubectl apply -f jenkins.yaml
deployment.apps/jenkins created
[root@k8s-master-node1 ~]# kubectl get pod -n devops
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7d4f5696b7-hqw9d   1/1     Running   0          88s
# Enter the Jenkins pod and confirm docker and kubectl are available
[root@k8s-master-node1 ~]# kubectl exec -it -n devops jenkins-7d4f5696b7-hqw9d bash
[root@k8s-master-node1 BlueOcean]# kubectl expose deployment jenkins -n devops --port=8080 --target-port=30880 --dry-run -o yaml >> jenkins.yaml
[root@k8s-master-node1 BlueOcean]# vim jenkins.yaml    # the Service is appended after the Deployment; edit it to:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: jenkins
  name: jenkins
  namespace: devops
spec:
  ports:
  - port: 8080
    protocol: TCP
    name: jenkins8080
    nodePort: 30880
  - name: jenkins
    port: 50000
    nodePort: 30850
  selector:
    app: jenkins
  type: NodePort
[root@k8s-master-node1 BlueOcean]# kubectl apply -f jenkins.yaml
service/jenkins created
[root@k8s-master-node1 ~]# kubectl get -n devops svc
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
jenkins   NodePort   10.96.53.170   <none>        8080:30880/TCP   10s
# Install Blue Ocean and the other offline plugins from the provided package
[root@k8s-master-node1 BlueOcean]# kubectl -n devops cp plugins/ jenkins-7d4f5696b7-hqw9d:/var/jenkins_home/
* Visit ip:30880 to open Jenkins
# Look up the initial password
[root@k8s-master-node1 BlueOcean]# kubectl -n devops exec jenkins-7d4f5696b7-hqw9d -- cat /var/jenkins_home/secrets/initialAdminPassword
2.2.7 Install GitLab
Deploy GitLab to the default namespace, set the root user's password, create a public project, and upload the provided code to that project.
[root@k8s-master-node1 BlueOcean]# kubectl create deployment gitlab -n devops --image=gitlab/gitlab-ce:latest --port 80 --dry-run -o yaml > gitlab.yaml
W0222 12:00:34.346609   25564 helpers.go:555] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-master-node1 BlueOcean]# vim gitlab.yaml    # the GitLab configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: gitlab
  name: gitlab
  namespace: devops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: gitlab
    spec:
      containers:
      - image: gitlab/gitlab-ce:latest
        imagePullPolicy: IfNotPresent
        name: gitlab-ce
        ports:
        - containerPort: 80
        env:
        - name: GITLAB_ROOT_PASSWORD
          value: admin@123
[root@k8s-master-node1 BlueOcean]# kubectl apply -f gitlab.yaml
deployment.apps/gitlab created
[root@k8s-master-node1 BlueOcean]# kubectl get pod -n devops
NAME                      READY   STATUS    RESTARTS      AGE
gitlab-5b47c8d994-8s9qb   1/1     Running   0             17s
jenkins-bbf477c4f-55vgj   1/1     Running   2 (15m ago)   34m
[root@k8s-master-node1 BlueOcean]# kubectl expose deployment gitlab -n devops --port=80 --target-port=30888 --dry-run=client -o yaml >> gitlab.yaml
[root@k8s-master-node1 BlueOcean]# vim gitlab.yaml    # append:
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: gitlab
  name: gitlab
  namespace: devops
spec:
  ports:
  - port: 80
    nodePort: 30888
  selector:
    app: gitlab
  type: NodePort
[root@k8s-master-node1 BlueOcean]# kubectl apply -f gitlab.yaml
deployment.apps/gitlab configured
service/gitlab created
[root@k8s-master-node1 BlueOcean]# kubectl get svc -n devops
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
gitlab    NodePort   10.96.149.160   <none>        80:30888/TCP     6s
jenkins   NodePort   10.96.174.123   <none>        8080:30880/TCP   8m7s
# Wait for GitLab to start, then log in at IP:30888 with root / admin@123
# Upload the code in the springcloud folder to the project (GitLab shows sample commands)
[root@k8s-master-node1 BlueOcean]# cd springcloud/
[root@k8s-master-node1 springcloud]# git config --global user.name "Administrator"
[root@k8s-master-node1 springcloud]# git config --global user.email "admin@example.com"
[root@k8s-master-node1 springcloud]# git remote remove origin
[root@k8s-master-node1 springcloud]# git remote add origin http://192.168.100.23:30888/root/springcloud.git
[root@k8s-master-node1 springcloud]# git add .
[root@k8s-master-node1 springcloud]# git commit -m "Initial commit"
# On branch master
nothing to commit, working directory clean
[root@k8s-master-node1 springcloud]# git push -u origin master
Username for 'http://192.168.100.23:30888': root
Password for 'http://root@192.168.100.23:30888': (admin@123)
Counting objects: 3192, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (1428/1428), done.
Writing objects: 100% (3192/3192), 1.40 MiB | 0 bytes/s, done.
Total 3192 (delta 1233), reused 3010 (delta 1207)
remote: Resolving deltas: 100% (1233/1233), done.
To http://192.168.100.23:30888/root/springcloud.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
2.2.8 Integrate Jenkins with GitLab
Create a pipeline job in Jenkins, configure the GitLab connection to Jenkins, and complete the WebHook setup:
* Generate an Access Token named jenkins in GitLab
* Return to Jenkins
* Go back to GitLab and copy the token
* Paste the token into the Jenkins configuration
2.2.9 Build the CI/CD environment
Write a pipeline script in the pipeline job, then trigger a build: the GitLab project should be compiled, the images built and pushed, and the services deployed to the Kubernetes cluster automatically.
# Create the namespace
[root@k8s-master-node1 ~]# kubectl create ns springcloud
* Create the pipeline job
* Add the GitLab username and password
* Create a public project springcloud in the Harbor registry
* Return to GitLab and prepare to write the pipeline
# Add the host mapping
[root@k8s-master-node1 ~]# cat /etc/hosts
192.168.59.200 apiserver.cluster.local    # this is the line to replicate
# Add the mapping inside the Jenkins pod
[root@k8s-master-node1 ~]# kubectl exec -it -n devops jenkins-bbf477c4f-55vgj bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@jenkins-bbf477c4f-55vgj:/# echo "192.168.200.59 apiserver.cluster.local" >> /etc/hosts
root@jenkins-bbf477c4f-55vgj:/# cat /etc/hosts    # verify
# Write the pipeline
pipeline {
    agent none
    stages {
        stage('mvn-build') {
            agent {
                docker {
                    image '192.168.3.10/library/maven'
                    args '-v /root/.m2:/root/.m2'
                }
            }
            steps {
                sh 'cp -rvf /opt/repository /root/.m2'
                sh 'mvn package -DskipTests'
            }
        }
        stage('image-build') {
            agent any
            steps {
                sh 'cd gateway && docker build -t 192.168.3.10/springcloud/gateway -f Dockerfile .'
                sh 'cd config && docker build -t 192.168.3.10/springcloud/config -f Dockerfile .'
                sh 'docker login 192.168.3.10 -u=admin -p=Harbor12345'
                sh 'docker push 192.168.3.10/springcloud/gateway'
                sh 'docker push 192.168.3.10/springcloud/config'
            }
        }
        stage('cloud-deployment') {
            agent any
            steps {
                sh 'sed -i "s/sqshq\\/piggymetrics-gateway/192.168.3.10\\/springcloud\\/gateway/g" yaml/deployment/gateway-deployment.yaml'
                sh 'sed -i "s/sqshq\\/piggymetrics-config/192.168.3.10\\/springcloud\\/config/g" yaml/deployment/config-deployment.yaml'
                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/deployment/gateway-deployment.yaml'
                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/deployment/config-deployment.yaml'
                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/svc/gateway-svc.yaml'
                sh 'kubectl apply -f /var/jenkins_home/workspace/springcloud/yaml/svc/config-svc.yaml'
            }
        }
    }
}
stages: all execution phases of the pipeline; there is usually a single stages block containing multiple stage blocks.
stage: one phase of the pipeline; there may be n of them, typically checkout, build, and deploy phases.
steps: the logic executed within a phase — shell scripts, git checkouts, remote deployment over ssh, and so on.
* Save the pipeline and configure the Webhook to trigger builds
* Untick the SSL verification option and click Add webhook
* Test the webhook; after success, back in Jenkins the pipeline starts building automatically
* The pipeline completes successfully
# All workloads built by the pipeline are running
[root@k8s-master-node1 ~]# kubectl get pod -n springcloud
NAME                       READY   STATUS    RESTARTS      AGE
config-77c74dd878-8kl4x    1/1     Running   0             28s
gateway-5b46966894-twv5k   1/1     Running   1 (19s ago)   28s
[root@k8s-master-node1 ~]# kubectl -n springcloud get service
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
config    NodePort   10.96.137.40   <none>        8888:30015/TCP   4m3s
gateway   NodePort   10.96.121.82   <none>        4000:30010/TCP   4m4s
* Wait for the Piggymetrics microservices to start, then visit ip:30010 to confirm the build succeeded
2.2.10 Service mesh: create an Ingress Gateway
Deploy the Bookinfo application to the default namespace, and create a gateway for it so the application can be reached from outside.
Upload the ServiceMesh.tar.gz package:
[root@k8s-master-node1 ~]# tar -zxvf ServiceMesh.tar.gz
[root@k8s-master-node1 ~]# cd ServiceMesh/images/
[root@k8s-master-node1 images]# docker load -i image.tar
Deploy Bookinfo to the Kubernetes cluster:
[root@k8s-master-node1 images]# cd /root/ServiceMesh/
[root@k8s-master-node1 ServiceMesh]# kubectl apply -f bookinfo/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
[root@k8s-master-node1 ServiceMesh]# kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-kndm9       1/1     Running   0          7s
productpage-v1-6b746f74dc-bswbx   1/1     Running   0          7s
ratings-v1-b6994bb9-6hqfn         1/1     Running   0          7s
reviews-v1-545db77b95-j72x5       1/1     Running   0          7s
[root@k8s-master-node1 ServiceMesh]# vim bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:    # the list of routing destinations
    - destination:
        host: productpage
        port:
          number: 9080
[root@k8s-master-node1 ServiceMesh]# kubectl apply -f bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
[root@k8s-master-node1 ServiceMesh]# kubectl get VirtualService bookinfo -o yaml    # shows bookinfo-gateway, exact: /productpage, destination host: productpage, port number: 9080
[root@k8s-master-node1 ServiceMesh]# kubectl get gateway bookinfo-gateway -o yaml    # shows istio: ingressgateway
2.2.11 KubeVirt operations: create a VM
Use the provided image to create a VM named exam in the kubevirt namespace, specifying the VM's memory, CPU, NIC, and disk configuration.
[root@k8s-master-node1 ~]# kubectl explain kubevirt.spec. --recursive | grep use
   useEmulation <boolean>
[root@k8s-master-node1 ~]# kubectl -n kubevirt edit kubevirt
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:    # was {}
      useEmulation: true
[root@k8s-master-node1 ~]# vim vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: exam
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: vm
            disk: {}
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: vm
        containerDisk:
          image: fedora-virt:v1.0
          imagePullPolicy: IfNotPresent
[root@k8s-master-node1 ~]# kubectl apply -f vm.yaml
virtualmachine.kubevirt.io/exam created
[root@k8s-master-node1 ~]# kubectl get virtualmachine
NAME   AGE   STATUS    READY
exam   31s   Running   True
[root@k8s-master-node1 ~]# kubectl delete -f vm.yaml
virtualmachine.kubevirt.io "exam" deleted
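If the VM ever needs interactive verification before deletion, a common follow-up — assuming the virtctl client that ships with KubeVirt is installed, and adjusting -n to wherever the VM actually runs — is:

# List the running VirtualMachineInstance spawned by the VirtualMachine object
kubectl get vmi

# Attach to the VM's serial console
virtctl console exam

kubectl get vmi confirms the instance is scheduled and running; virtctl console gives a login prompt on its serial console.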
-
This link fails to download — how do I solve this? https://repo.huaweicloud.com/openeuler/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso
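A quick diagnostic sketch to tell a mirror-side problem from a client-side one, using standard curl/wget flags:

# Inspect the HTTP status and headers without downloading the large ISO
curl -I https://repo.huaweicloud.com/openeuler/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso

# Download with resume support, so an interrupted transfer can continue
wget -c https://repo.huaweicloud.com/openeuler/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso

A 200 or 302 response with a Content-Length header means the mirror is serving the file, and the failure is more likely local network, proxy, or disk space.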
-
As we enter 2025, the next Volcano release marks a new milestone. In 2025 the community will introduce a series of major features and continue to invest in CNAI (Cloud Native AI) and big data, including:

AI scenarios:
- Network topology-aware scheduling: reduces network transfer overhead between training tasks and improves performance for large-model training.
- NPU scheduling and virtualization: improves NPU resource utilization.
- Dynamic GPU partitioning: provides MIG and MPS dynamic partitioning to improve GPU resource utilization.
- Volcano Global multi-cluster AI job scheduling: supports deploying and splitting AI jobs across clusters.
- Improved checkpoint-resume and fault recovery: supports finer-grained job restart policies.
- DRA support: dynamic resource allocation for flexible, efficient management of heterogeneous resources.

Big data scenarios:
- Elastic hierarchical queues: help users migrate big data workloads smoothly to cloud-native platforms.

Microservice scenarios:
- Online/offline colocation with dynamic resource oversubscription: improves resource utilization while guaranteeing the QoS of online services.
- Load-aware scheduling and rescheduling: provides resource defragmentation and load balancing.

The official release of Volcano v1.11 [1] marks a new stage for cloud-native batch computing. This release focuses on the core needs of AI and big data, introducing headline features such as network topology-aware scheduling and multi-cluster AI job scheduling that significantly improve AI training and inference performance. Online/offline colocation with dynamic resource oversubscription and load-aware rescheduling further improve resource utilization while keeping online services highly available, and elastic hierarchical queues give big data scenarios a more flexible scheduling policy. Volcano v1.11 is both a technical leap and a new benchmark for cloud-native batch computing.
-
How do I download an image from the Container Image Center?
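For what it's worth, images in the SWR-backed image center are pulled with the standard Docker workflow; a hedged sketch where the region, organization, image, and tag are all placeholders to replace with your own:

# Log in to the regional SWR endpoint (credentials come from the SWR console), then pull
docker login swr.cn-north-4.myhuaweicloud.com
docker pull swr.cn-north-4.myhuaweicloud.com/<organization>/<image>:<tag>

The exact pull command for a given image is usually shown on that image's detail page in the console.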
-
When creating a cloud host from eSight_23.0.0_EulerOSV2R11-x86-64.iso, it keeps repeating the prompt "Please make a selection from the above".