Static PV provisioning has an obvious drawback: the maintenance cost is too high. Kubernetes therefore also supports dynamic PV provisioning, implemented through the StorageClass object. According to the official documentation, however, NFS storage is not supported by a built-in provisioner, so a plugin (the NFS client provisioner) has to be installed to get dynamic PV provisioning backed by NFS.

```yaml
[root@k8s-node2 nfs-client]# vi class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true"
```

```yaml
[root@k8s-node2 nfs-client]# vi deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.0.21      # NFS server address
            - name: NFS_PATH
              value: /ifs/kubernetes   # NFS export directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.21
            path: /ifs/kubernetes
```

```yaml
[root@k8s-node2 nfs-client]# vi rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```

Apply the manifests directly and check that the provisioner Pod is running:

```bash
[root@k8s-node2 nfs-client]# kubectl apply -f .
[root@k8s-node2 ~]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
dns-test                                  1/1     Running   0          83m
nfs-client-provisioner-78c97f97c6-ndtsd   1/1     Running   0          11m
```
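With the provisioner running, dynamic provisioning can be exercised by creating a PVC that references the managed-nfs-storage class. The manifest below is a minimal sketch for verification only; the claim name test-claim and the 1Mi request are illustrative and not part of the original walkthrough:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim                        # illustrative name
spec:
  storageClassName: managed-nfs-storage   # must match the StorageClass created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

After `kubectl apply -f` the claim should become Bound, and `kubectl get pv` should show an automatically created PV backed by a subdirectory of the /ifs/kubernetes export, which confirms the provisioner is working.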
The following example manifests define a hostPath PersistentVolume (pv-wp) and a ChartMuseum Deployment with its Service:

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-wp
  labels:
    name: pv-wp
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /wp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: chartmuseum
  name: chartmuseum
  namespace: chartmuseum
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chartmuseum
  template:
    metadata:
      labels:
        app: chartmuseum
    spec:
      containers:
        - image: chartmuseum/chartmuseum:latest
          imagePullPolicy: IfNotPresent
          name: chartmuseum
          ports:
            - containerPort: 8080
              protocol: TCP
          env:
            - name: DEBUG
              value: "1"
            - name: STORAGE
              value: local
            - name: STORAGE_LOCAL_ROOTDIR
              value: /charts
          volumeMounts:
            - mountPath: /charts
              name: charts-volume
      volumes:
        - name: charts-volume
          hostPath:
            path: /data/charts
            type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: chartmuseum
  namespace: chartmuseum
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: chartmuseum
```

After starting the cloud host environment from the k8s-allinone image, wait until the cluster has finished initializing, then check the Kubernetes cluster status and the hostname:

```bash
[root@master ~]# kubectl get nodes,pod -A
NAME          STATUS   ROLES                  AGE     VERSION
node/master   Ready    control-plane,master   5m14s   v1.22.1

NAMESPACE        NAME                                   READY   STATUS    RESTARTS   AGE
kube-dashboard   pod/dashboard-7575cf67b7-5s5hz         1/1     Running   0          4m51s
kube-dashboard   pod/dashboard-agent-69456b7f56-nzghp   1/1     Running   0          4m50s
kube-system      pod/coredns-78fcd69978-klknx           1/1     Running   0          4m58s
kube-system      pod/coredns-78fcd69978-xwzgr           1/1     Running   0          4m58s
kube-system      pod/etcd-master                        1/1     Running   0          5m14s
kube-system      pod/kube-apiserver-master              1/1     Running   0          5m11s
kube-system      pod/kube-controller-manager-master     1/1     Running   0          5m13s
kube-system      pod/kube-flannel-ds-9gdnl              1/1     Running   0          4m51s
kube-system      pod/kube-proxy-r7gq9                   1/1     Running   0          4m58s
kube-system      pod/kube-scheduler-master              1/1     Running   0          5m11s
kube-system      pod/metrics-server-77564bc84d-tlrp7    1/1     Running   0          4m50s
```

The output shows that the cluster and all Pods are healthy. Check the hostname of the current node:

```bash
[root@master ~]# hostnamectl
   Static hostname: master
         Icon name: computer-vm
           Chassis: vm
        Machine ID: cc2c86fe566741e6a2ff6d399c5d5daa
           Boot ID: 94e196b737b6430bac5fbc0af88cbcd1
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1160.el7.x86_64
      Architecture: x86-64
```

Set the hostname of the edge node:

```bash
[root@localhost ~]# hostnamectl set-hostname kubeedge-node
[root@kubeedge-node ~]# hostnamectl
   Static hostname: kubeedge-node
         Icon name: computer-vm
           Chassis: vm
        Machine ID: cc2c86fe566741e6a2ff6d399c5d5daa
           Boot ID: c788c13979e0404eb5afcd9b7bc8fd4b
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1160.el7.x86_64
      Architecture: x86-64
```

Configure the host mapping file on both the cloud node and the edge node:

```bash
[root@master ~]# cat >> /etc/hosts <<EOF
10.26.17.135 master
10.26.7.126 kubeedge-node
EOF
[root@kubeedge-node ~]# cat >> /etc/hosts <<EOF
10.26.17.135 master
10.26.7.126 kubeedge-node
EOF
```

(2) Configure Yum repositories on the cloud and edge nodes

Download the installation package kubernetes_kubeedge_allinone.tar.gz to the /root directory of the cloud master node and extract it to /opt:

```bash
[root@master ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/kubernetes_kubeedge_allinone.tar.gz
[root@master ~]# tar -zxvf kubernetes_kubeedge_allinone.tar.gz -C /opt/
[root@master ~]# ls /opt/
docker-compose-Linux-x86_64  harbor-offline-installer-v2.5.0.tgz  kubeedge               kubernetes_kubeedge.tar.gz
ec-dashboard-sa.yaml         k8simage                             kubeedge-counter-demo  yum
```

Configure the Yum repository on the cloud master node:

```bash
[root@master ~]# mv /etc/yum.repos.d/* /media/
[root@master ~]# cat > /etc/yum.repos.d/local.repo <<EOF
[docker]
name=docker
baseurl=file:///opt/yum
gpgcheck=0
enabled=1
EOF
[root@master ~]# yum -y install vsftpd
[root@master ~]# echo anon_root=/opt >> /etc/vsftpd/vsftpd.conf
```

Start the service and enable it at boot:

```bash
[root@master ~]# systemctl enable vsftpd --now
```

Configure the Yum repository on the edge node kubeedge-node:

```bash
[root@kubeedge-node ~]# mv /etc/yum.repos.d/* /media/
[root@kubeedge-node ~]# cat >/etc/yum.repos.d/ftp.repo <<EOF
[docker]
name=docker
baseurl=ftp://master/yum
gpgcheck=0
enabled=1
EOF
```

(3) Configure Docker on the cloud and edge nodes

Docker is already installed on the cloud master node; configure it to pull images from the local insecure registry:

```bash
[root@master ~]# vi /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "200m",
    "max-file": "5"
  },
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 655360,
      "Soft": 655360
    },
    "nproc": {
      "Name": "nproc",
      "Hard": 655360,
      "Soft": 655360
    }
  },
  "live-restore": true,
  "oom-score-adjust": -1000,
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 10,
  "insecure-registries": ["0.0.0.0/0"]
}
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
```
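As a quick sanity check (not part of the original steps), docker info can be used to confirm that the daemon has picked up the new settings; the configured range should be listed under "Insecure Registries":

```bash
# verification step not in the original walkthrough
[root@master ~]# docker info | grep -A 2 "Insecure Registries"
```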
Install Docker on the edge node and apply the same local registry configuration:

```bash
[root@kubeedge-node ~]# yum -y install docker-ce
[root@kubeedge-node ~]# vi /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "200m",
    "max-file": "5"
  },
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 655360,
      "Soft": 655360
    },
    "nproc": {
      "Name": "nproc",
      "Hard": 655360,
      "Soft": 655360
    }
  },
  "live-restore": true,
  "oom-score-adjust": -1000,
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 10,
  "insecure-registries": ["0.0.0.0/0"]
}
[root@kubeedge-node ~]# systemctl daemon-reload
[root@kubeedge-node ~]# systemctl enable docker --now
```

(4) Deploy the Harbor registry on the cloud node

Deploy a local Harbor image registry on the cloud master node:

```bash
[root@master ~]# cd /opt/
[root@master opt]# mv docker-compose-Linux-x86_64 /usr/bin/docker-compose
[root@master opt]# tar -zxvf harbor-offline-installer-v2.5.0.tgz
[root@master opt]# cd harbor && cp harbor.yml.tmpl harbor.yml
[root@master harbor]# vi harbor.yml
hostname: 10.26.17.135    # change hostname to the cloud node IP
[root@master harbor]# ./install.sh
......
✔ ----Harbor has been installed and started successfully.----
[root@master harbor]# docker login -u admin -p Harbor12345 master
....
Login Succeeded
```

Open a browser, visit the Harbor page at the cloud master node IP, log in with the default username and password (admin/Harbor12345), and create a project named "k8s", as shown in the figure below:

Figure 2-1 Creating the k8s project

Load the local images and push them to the Harbor registry:

```bash
[root@master harbor]# cd /opt/k8simage/ && sh load.sh
[root@master k8simage]# sh push.sh
请输入您的Harbor仓库地址(不需要带http):10.26.17.135    # enter the cloud master node IP (without http)
```

(5) Configure node affinity

On the cloud node, configure node affinity for the flannel and kube-proxy Pods so that they are not scheduled onto edge nodes:

```bash
[root@master k8simage]# kubectl edit daemonset -n kube-system kube-flannel-ds
......
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                  - linux
              - key: node-role.kubernetes.io/edge    # add this expression before the containers field
                operator: DoesNotExist
[root@master k8simage]# kubectl edit daemonset -n kube-system kube-proxy
spec:
  affinity:                                          # add this block before the containers field
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-role.kubernetes.io/edge
                operator: DoesNotExist
[root@master k8simage]# kubectl get pod -n kube-system
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-q7mfq   1/1     Running   0          13m
kube-proxy-wxhkm        1/1     Running   0          39s
```

After the modification, both Pods are recreated and return to the Running state.

(6) Set up the KubeEdge cloud-side environment

On the cloud master node, prepare the required packages and service configuration files:

```bash
[root@master k8simage]# cd /opt/kubeedge/
[root@master kubeedge]# mv keadm /usr/bin/
[root@master kubeedge]# mkdir /etc/kubeedge
[root@master kubeedge]# tar -zxf kubeedge-1.11.1.tar.gz
[root@master kubeedge]# cp -rf kubeedge-1.11.1/build/tools/* /etc/kubeedge/
[root@master kubeedge]# cp -rf kubeedge-1.11.1/build/crds/ /etc/kubeedge/
[root@master kubeedge]# tar -zxf kubeedge-v1.11.1-linux-amd64.tar.gz
[root@master kubeedge]# cp -rf * /etc/kubeedge/
```

Start the cloud-side service:

```bash
[root@master kubeedge]# cd /etc/kubeedge/
[root@master kubeedge]# keadm deprecated init --kubeedge-version=1.11.1 --advertise-address=10.26.17.135
......
KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
CloudCore started
```

● --kubeedge-version=: specifies the KubeEdge version. It must be set for an offline installation, otherwise the latest version is downloaded automatically.
● --advertise-address=: the exposed IP address; use the internal IP of the node where keadm runs. To connect an on-premises cluster you would use the public IP; since this cluster runs in the cloud, the internal IP is sufficient.

Check the cloud-side service:

```bash
[root@master kubeedge]# netstat -ntpl |grep cloudcore
tcp6       0      0 :::10000      :::*      LISTEN      974/cloudcore
tcp6       0      0 :::10002      :::*      LISTEN      974/cloudcore
```
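If these ports are not listening, the cloudcore log mentioned in the keadm output above can be inspected before continuing, for example:

```bash
# optional troubleshooting step; the log path is the one reported by keadm above
[root@master kubeedge]# tail -n 50 /var/log/kubeedge/cloudcore.log
```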
(7) Set up the KubeEdge edge-side environment

On the edge node kubeedge-node, copy the cloud-side packages to the local machine:

```bash
[root@kubeedge-node ~]# scp root@master:/usr/bin/keadm /usr/local/bin/
[root@kubeedge-node ~]# mkdir /etc/kubeedge
[root@kubeedge-node ~]# cd /etc/kubeedge/
[root@kubeedge-node kubeedge]# scp -r root@master:/etc/kubeedge/* /etc/kubeedge/
```

On the cloud master node, query the token; remove any line breaks from the copied token value:

```bash
[root@master kubeedge]# keadm gettoken
1f0f213568007af1011199f65ca6405811573e44061c903d0f24c7c0379a5f65.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTEwNTc2ODN9.48eiBKuwwL8bFyQcfYyicnFSogra0Eh0IpyaRMg5NvY
```

On the edge node kubeedge-node, join the cluster:

```bash
[root@kubeedge-node ~]# keadm deprecated join --cloudcore-ipport=10.26.17.135:10000 --kubeedge-version=1.11.1 --token=1f0f213568007af1011199f65ca6405811573e44061c903d0f24c7c0379a5f65.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTEwNTc2ODN9.48eiBKuwwL8bFyQcfYyicnFSogra0Eh0IpyaRMg5NvY
install MQTT service successfully.
......
[Run as service] service file already exisits in /etc/kubeedge//edgecore.service, skip download
kubeedge-v1.11.1-linux-amd64/
kubeedge-v1.11.1-linux-amd64/edge/
kubeedge-v1.11.1-linux-amd64/edge/edgecore
kubeedge-v1.11.1-linux-amd64/version
kubeedge-v1.11.1-linux-amd64/cloud/
kubeedge-v1.11.1-linux-amd64/cloud/csidriver/
kubeedge-v1.11.1-linux-amd64/cloud/csidriver/csidriver
kubeedge-v1.11.1-linux-amd64/cloud/iptablesmanager/
kubeedge-v1.11.1-linux-amd64/cloud/iptablesmanager/iptablesmanager
kubeedge-v1.11.1-linux-amd64/cloud/cloudcore/
kubeedge-v1.11.1-linux-amd64/cloud/cloudcore/cloudcore
kubeedge-v1.11.1-linux-amd64/cloud/controllermanager/
kubeedge-v1.11.1-linux-amd64/cloud/controllermanager/controllermanager
kubeedge-v1.11.1-linux-amd64/cloud/admission/
kubeedge-v1.11.1-linux-amd64/cloud/admission/admission
KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
```

If yum reports an error, remove the redundant yum repository files and re-run the join command:

```bash
[root@kubeedge-node kubeedge]# rm -rf /etc/yum.repos.d/epel*
```

Check that the edgecore service is active:

```bash
[root@kubeedge-node kubeedge]# systemctl status edgecore
● edgecore.service
   Loaded: loaded (/etc/systemd/system/edgecore.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2023-08-03 06:05:39 UTC; 15s ago
 Main PID: 8405 (edgecore)
    Tasks: 15
   Memory: 34.3M
   CGroup: /system.slice/edgecore.service
           └─8405 /usr/local/bin/edgecore
```

On the cloud master node, check whether the edge node has joined successfully:

```bash
[root@master kubeedge]# kubectl get nodes
NAME            STATUS   ROLES                  AGE     VERSION
kubeedge-node   Ready    agent,edge             5m19s   v1.22.6-kubeedge-v1.11.1
master          Ready    control-plane,master   176m    v1.22.1
```

If two nodes are listed and both are in the Ready state, the edge node has joined successfully.

(8) Deploy the monitoring service on the cloud node

On the cloud master node, configure the certificates:

```bash
[root@master kubeedge]# export CLOUDCOREIPS="10.26.17.135"
```

The IP here is the cloud master node IP. Then generate the stream certificates:

```bash
[root@master kubeedge]# cd /etc/kubeedge/
[root@master kubeedge]# ./certgen.sh stream
```

Update the cloud-side configuration so that monitoring data can be forwarded to the cloud master node:

```bash
[root@master kubeedge]# vi /etc/kubeedge/config/cloudcore.yaml
cloudStream:
  enable: true          # change to true
  streamPort: 10003
router:
  address: 0.0.0.0
  enable: true          # change to true
  port: 9443
  restTimeout: 60
```

Update the edge-side configuration:

```bash
[root@kubeedge-node kubeedge]# vi /etc/kubeedge/config/edgecore.yaml
edgeStream:
  enable: true          # change to true
  handshakeTimeout: 30
serviceBus:
  enable: true          # change to true
```

Restart the cloud-side service:

```bash
[root@master kubeedge]# kill -9 $(netstat -lntup |grep cloudcore |awk 'NR==1 {print $7}' |cut -d '/' -f 1)
[root@master kubeedge]# cp -rfv cloudcore.service /usr/lib/systemd/system/
[root@master kubeedge]# systemctl start cloudcore.service
[root@master kubeedge]# netstat -lntup |grep 10003
tcp6       0      0 :::10003      :::*      LISTEN      15089/cloudcore
```

Check the port with netstat -lntup | grep 10003; if port 10003 is listening, cloudStream has been enabled successfully.
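After the edge-side service is restarted in the next step, the same stream tunnel is also what lets kubectl logs and kubectl exec reach Pods running on the edge node. A quick check, once a workload has been scheduled to kubeedge-node, would be the following (the Pod name is illustrative, not from the original walkthrough):

```bash
# illustrative check; nginx-edge-test stands for any Pod scheduled on kubeedge-node
[root@master kubeedge]# kubectl logs nginx-edge-test
```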
Restart the edge-side service:

```bash
[root@kubeedge-node kubeedge]# systemctl restart edgecore.service
```

On the cloud node, check the collected metrics:

```bash
[root@master kubeedge]# kubectl top nodes
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kubeedge-node   24m          0%     789Mi           6%
master          278m         3%     8535Mi          54%
```

After the services are restarted, it takes a while before the resource usage of kubeedge-node becomes visible, because the data has not yet been synchronized to the cloud node.

#### 2.1.2 Install Dependency Packages

First install the gcc compiler. Some system versions ship with gcc preinstalled; check with gcc --version. If it is missing, install it first, otherwise the Python build may fail. For Python versions below 3.7.0, libffi-devel is not required.

On the cloud node, download the offline yum repository and install the packages:

```bash
[root@master ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/gcc-repo.tar.gz
[root@master ~]# tar -zxvf gcc-repo.tar.gz
[root@master ~]# vi /etc/yum.repos.d/gcc.repo
[gcc]
name=gcc
baseurl=file:///root/gcc-repo
gpgcheck=0
enabled=1
[root@master ~]# yum -y install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel libffi-devel gcc
```

#### 2.1.3 Compile and Install Python 3.7

On the cloud node, download the Python 3.7 packages, then extract and compile them:

```bash
[root@master ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/Python-3.7.3.tar.gz
[root@master ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/volume_packages.tar.gz
[root@master ~]# mkdir /usr/local/python3 && tar -zxvf Python-3.7.3.tar.gz
[root@master ~]# cd Python-3.7.3/
[root@master Python-3.7.3]# ./configure --prefix=/usr/local/python3
[root@master Python-3.7.3]# make && make install
[root@master Python-3.7.3]# cd /root
```

#### 2.1.4 Create Python Symlinks

Extract the volume_packages archive, symlink the compiled Python 3.7 into /usr/bin, and check the version information:

```bash
[root@master ~]# tar -zxvf volume_packages.tar.gz
[root@master ~]# yes |mv volume_packages/site-packages/* /usr/local/python3/lib/python3.7/site-packages/
[root@master ~]# ln -s /usr/local/python3/bin/python3.7 /usr/bin/python3
[root@master ~]# ln -s /usr/local/python3/bin/pip3.7 /usr/bin/pip3
[root@master ~]# python3 --version
Python 3.7.3
[root@master ~]# pip3 list
Package                  Version
------------------------ --------------------
absl-py                  1.4.0
aiohttp                  3.8.4
aiosignal                1.3.1
anyio                    3.7.0
async-timeout            4.0.2
asynctest                0.13.0
...... (remaining output omitted) ......
```

### 2.2 Deploy MongoDB

#### 2.2.1 Deploy MongoDB

Copy the mongoRepo.tar.gz package to the edge node and extract it:

```bash
[root@kubeedge-node ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/mongoRepo.tar.gz
[root@kubeedge-node ~]# tar -zxvf mongoRepo.tar.gz -C /opt/
[root@kubeedge-node ~]# vi /etc/yum.repos.d/mongo.repo
[mongo]
name=mongo
enabled=1
gpgcheck=0
baseurl=file:///opt/mongoRepo
```

Install MongoDB on the edge node:

```bash
[root@kubeedge-node ~]# yum -y install mongodb*
```

After the installation, configure MongoDB:

```bash
[root@kubeedge-node ~]# vi /etc/mongod.conf
# locate the following fields and modify them
net:
  port: 27017
  bindIp: 0.0.0.0    # change to 0.0.0.0
```

After modifying the file, restart the service and enable it at boot:

```bash
[root@kubeedge-node ~]# systemctl restart mongod && systemctl enable mongod
```

Verify the service:

```bash
[root@kubeedge-node ~]# netstat -lntup |grep 27017
tcp        0      0 0.0.0.0:27017      0.0.0.0:*      LISTEN      10195/mongod
```

If port 27017 is listening, the MongoDB service has started successfully.

#### 2.2.2 Create the Database

On the edge node, log in to MongoDB and create the database and collections:

```bash
[root@kubeedge-node ~]# mongo
> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
> use edgesql
switched to db edgesql
> show collections
> db.createCollection("users")
{ "ok" : 1 }
> db.createCollection("ai_data")
{ "ok" : 1 }
> db.createCollection("ai_model")
{ "ok" : 1 }
> show collections
ai_data
ai_model
users
> # press Ctrl+D to exit
```
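To confirm that the collections accept data, a document can be inserted and read back from the mongo shell; the field names below are illustrative and not part of the original schema. Because bindIp is set to 0.0.0.0, the cloud-side application will also be able to reach this database over port 27017.

```bash
> use edgesql
switched to db edgesql
> db.users.insert({ "username": "admin", "created_at": new Date() })   // illustrative document
WriteResult({ "nInserted" : 1 })
> db.users.find().pretty()
```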
### 2.3 Deploy the H5 Front End

ydy_cloudapp_front_dist is the compiled H5 front-end application; it only needs to be served by a web server.

#### 2.3.1 Run the H5 Front End on Linux

On the edge node, download and extract the gcc-repo and ydy_cloudapp_front_dist archives, configure the Yum repository, install Nginx, and copy the extracted files into the Nginx document root:

```bash
[root@kubeedge-node ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/gcc-repo.tar.gz
[root@kubeedge-node ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/ydy_cloudapp_front_dist.tar.gz
[root@kubeedge-node ~]# tar -zxvf gcc-repo.tar.gz
[root@kubeedge-node ~]# tar -zxvf ydy_cloudapp_front_dist.tar.gz
[root@kubeedge-node ~]# vi /etc/yum.repos.d/gcc.repo
[gcc]
name=gcc
baseurl=file:///root/gcc-repo
gpgcheck=0
enabled=1
[root@kubeedge-node ~]# yum install -y nginx
[root@kubeedge-node ~]# rm -rf /usr/share/nginx/html/*
[root@kubeedge-node ~]# mv ydy_cloudapp_front_dist/index.html /usr/share/nginx/html/
[root@kubeedge-node ~]# mv ydy_cloudapp_front_dist/static/ /usr/share/nginx/html/
[root@kubeedge-node ~]# vi /etc/nginx/nginx.conf
# configure the Nginx reverse proxy: find the server block in the lower part of the file and edit it as follows
    server {
        listen       80;
        listen       [::]:80;
        server_name  localhost;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        location ~ ^/cloudedge/(.*) {
            proxy_pass http://10.26.17.135:30850/cloudedge/$1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type';
            add_header 'Access-Control-Allow-Credentials' 'true';
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
[root@kubeedge-node ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@kubeedge-node ~]# systemctl restart nginx && systemctl enable nginx
```
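As a final check (not part of the original steps), the front end and the reverse proxy can be probed with curl from the edge node; the /cloudedge/ request below is only an illustration of how matching paths are forwarded to the cloud-side backend on port 30850:

```bash
[root@kubeedge-node ~]# curl -I http://localhost/             # should return the H5 index page (HTTP/1.1 200 OK)
[root@kubeedge-node ~]# curl -I http://localhost/cloudedge/   # forwarded to http://10.26.17.135:30850/cloudedge/
```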